Locked Logins Query

The following query uses the status bits in master..syslogins to identify locked logins:

select name from master..syslogins where status&2=2
go



The various status bits in syslogins correspond to the values below; to query for a particular one, simply substitute the relevant number for the 2 in the query above:

2 – Account is locked.
4 – Password has expired. This is set on the user’s first login attempt after expiration.
8 – Indicates that the value of exempt inactive lock is set to TRUE. It is not applicable for login profile rows.
16 – OMNI:autoconnect mode is enabled.
32 – May use SAP ASE internal authentication mechanism – syslogins.
64 – May use LDAP external authentication.
128 – May use PAM external authentication.
256 – May use Kerberos external authentication.
512 – Indicates a login profile.
1536 – Indicates a default login profile.
2048 – Indicates an authentication mechanism specified in a login profile.
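
For example, to list logins whose password has expired (bit 4), substitute 4 in the same query:

select name from master..syslogins where status&4=4
go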

Migrate Syslogins etc from 15.7 to 16

There are three tables which will need to be copied over and synced up: syslogins, syssrvroles and sysloginroles.

The steps below will vary slightly between versions. For example, if you are upgrading from Sybase version 12 to 15 you would follow the steps below; for other ASE versions you might need to adjust the temp table definitions.

Source Server

The first step is to bcp out these tables (syslogins, syssrvroles and sysloginroles) from the source Sybase server and copy the files over to the destination.

e.g. bcp master..syslogins out syslogins.out -U<Username> -S<Servername> -n -X
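
The same applies to the other two tables (the output file names here are just illustrative):

bcp master..syssrvroles out syssrvroles.out -U<Username> -S<Servername> -n -X
bcp master..sysloginroles out sysloginroles.out -U<Username> -S<Servername> -n -X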

From the destination server, scp the files across, e.g.

scp zkc3yit@gbzzyxsyordad02.gbcaydc.baml.com:/tmp/t/* /tmp/t


Destination Server

Syslogins

  • Create the temporary syslogins table on the new server
    sp_configure 'allow updates to system tables',1
    go
    USE tempdb
    go
    /*drop table dbo.temp_syslogins
    go
    */
    CREATE TABLE dbo.temp_syslogins
    (
    suid        int             NOT NULL,
    status      smallint        NOT NULL,
    accdate     datetime        NOT NULL,
    totcpu      int             NOT NULL,
    totio       int             NOT NULL,
    spacelimit  int             NOT NULL,
    timelimit   int             NOT NULL,
    resultlimit int             NOT NULL,
    dbname      sysname          NULL,
    name        sysname     NOT NULL,
    password    varbinary(130)   NULL,
    language    varchar(30)     NULL,
    pwdate      datetime        NULL,
    audflags    int             NULL,
    fullname    varchar(30)     NULL,
    srvname     varchar(30)     NULL,
    logincount  smallint        NULL,
    procid      int             NULL,
    lastlogindate datetime      NULL,
    crdate      datetime        NULL,
    locksuid    int             NULL,
    lockreason  int             NULL,
    lockdate    datetime        NULL,
    lpid           int            NULL,
    crsuid         int            NULL
    )
    LOCK ALLPAGES
    Go


    You may need to tweak the column types depending on which version of Sybase you are importing from; check the source syslogins table to verify.
  • Now bcp in the logins which were bcp'd out on the old Sybase server
    bcp tempdb..temp_syslogins in syslogins.out -Usa -P -SMICOY2 -n -X

  • Make a note of the highest suid currently in master..syslogins
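    For example:
    select max(suid) from master..syslogins
    go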

  • Check to see which logins already exist in master..syslogins and identify any clashes
    select s.suid, t.* from tempdb..temp_syslogins t, master..syslogins s
    where t.name =s.name
    go


    Make a careful note of any discrepancies in suids. If any entry in the source has a higher suid than the current highest suid in master..syslogins (e.g. if the highest current suid is 7 and ipgbmsed2 is 7 in master..syslogins but 8 in temp_syslogins) then make a note to remove any special privileges from suid 8 in temp_sysloginroles later on, if it has any. If the suid is within the current highest suid range then it will be removed automatically by the delete before the insert, so it can be ignored.

    Also run:

    select s.name, t.name, t.*
    from tempdb..temp_syslogins t, master..syslogins s
    where t.suid =s.suid
    and t.name not in (select name from master..syslogins)
    go


    Here, if an suid in the source clashes with a different login on the destination (i.e. they have different names and the source name does not exist on the destination) then you will need to increment the suid in the temp syslogins table, and later in the temp sysloginroles table as well.

  • From the second check above, increase the suid of any source login which clashed with the destination, e.g.
    update temp_syslogins
    set suid=3006
    where suid=6
    go


  • Delete existing logins which match by name
    delete from tempdb..temp_syslogins
    where name in (
    select t.name from tempdb..temp_syslogins t, master..syslogins s
    where t.name =s.name
    )
    go

  • Now copy the syslogins over to the master..syslogins table
    insert into master..syslogins
    select * from tempdb..temp_syslogins
    go


Syssrvroles

  • Create the temp_syssrvroles table
    USE tempdb
    go
    CREATE TABLE tempdb..temp_syssrvroles
    (
        srid       int            NOT NULL,
        name       varchar(30)    NOT NULL,
        password   varbinary(128) NULL,
        pwdate     datetime       NULL,
        status     smallint       NULL,
        logincount smallint       NULL,
         locksuid int            NULL,
    lockreason  int           NULL,
    lockdate   datetime       NULL
    )
    go
  • bcp in the entries from source
    bcp tempdb..temp_syssrvroles in c1p16_sysroles.out -Usa -P -SMICOY2 -n -X
  • Check to see if there are any clashing srids in the source compared to the destination; it should be fine going from ASE 15.7 to 16
    Select s.srid as master_srid, s.name as master_name,
    t.srid as source_srid, t.name as source_name, t.*
    from tempdb..temp_syssrvroles t, master..syssrvroles s
    where t.srid=s.srid
    go

    If there are any clashes from the above (which from 15.7 to 16 there should not be), update any srids which conflict with existing srids in master..syssrvroles, e.g.
    update tempdb..temp_syssrvroles
    set srid= 133 where srid=33
    go


  • Check to see if there are any clashing role names between the source and destination
    Select s.srid as master_srid, s.name as master_name,
    t.srid as source_srid, t.name as source_name, t.*
    from tempdb..temp_syssrvroles t, master..syssrvroles s
    where t.name =s.name
    go

    If there are any clashes from the above query then delete the srid from the temp table; we will also later need to update sysloginroles to use the srid which already exists in master.
    delete from temp_syssrvroles
    where srid=34
    go

  • Delete the srvroles which already exist in master from the temp table.
    delete from tempdb..temp_syssrvroles
    where srid <= 33 -- biggest srid currently in master..syssrvroles is 33
    go

  • Copy the roles into master..syssrvroles
    insert into master..syssrvroles
    select * from tempdb..temp_syssrvroles
    go


Sysloginroles

  • Create the temp_sysloginroles table
    USE tempdb
    go
    CREATE TABLE dbo.temp_sysloginroles
    (
    suid   int      NOT NULL,
    srid   int      NOT NULL,
    status smallint NOT NULL,
    predid int      NULL
    )
    Go

  • BCP in the entries from source
    bcp tempdb..temp_sysloginroles in c1p16_sysloginroles.out -Usa -P -SMICOY2 -n -X

  • Update the same suid as you did for syslogins
    update tempdb..temp_sysloginroles
    set suid = 3006
    where suid=6
    go


  • Delete any suid’s from temp_sysloginroles which already existed at the start in master..syslogins and master..sysloginroles
    delete from tempdb..temp_sysloginroles
    where suid <= 8 -- highest suid recorded earlier
    go

  • Delete any suid’s from temp_sysloginroles which were previously a mismatch between master..syslogins and temp_syslogins
    delete from tempdb..temp_sysloginroles
    where suid in (10)
    go
  • Update any srid number which you had to update previously in the syssrvroles step e.g.
    update tempdb..temp_sysloginroles
    set srid= 133
    where srid=33 -- updated srid from tempdb..temp_syssrvroles
    go

  • If there were any incidents of the same role name having different srid’s in syssrvroles (the second check) then you need to update the srid:
    update temp_sysloginroles
    set srid=33
    where srid=34
    go

  • Delete any rows from temp_sysloginroles which do not have a matching suid in master..syslogins
    delete from tempdb..temp_sysloginroles
    where suid not in (select suid from master..syslogins)
    go

  • Make sure everything looks ok
    select * from tempdb..temp_sysloginroles
    go
  • Now copy the login roles into master..sysloginroles
    insert into master..sysloginroles
    select * from tempdb..temp_sysloginroles
    go

  • Disable updates to system tables
    sp_configure 'allow updates to system tables', 0
    go


Synchronize Logins

The next steps relate to synchronizing the suids after you have loaded the old database into the new server.

  • First thing is to enable updates to system catalogs
    sp_configure 'allow updates to system tables',1
    go


  • Then, if you had to alter/increment any suids in syslogins earlier, you will also need to do the same in sysusers for any database that contains that suid (e.g. 6):
    update sysusers set suid=3006
    where suid=6
    go


  • Next check whether there are any aliases set up without matching entries in syslogins:
    select * from sysalternates
    where suid not in (select suid from master..syslogins);
    go

    If you find any entries which do not match with anything in syslogins then they should be deleted from sysalternates:
    delete from sysalternates
    where suid not in (select suid from master..syslogins)
    go

  • The next check is to find situations where the same name in syslogins and sysusers has a different suid.
    select l.suid 'Login suid', u.suid 'User suid', l.name 'Login name', u.name 'User name'
    from
    master..syslogins l, sysusers u
    where l.name = u.name and l.suid != u.suid and u.name != "dbo"
    go

    If this finds any mismatches then the following query will resync the suids in the user database with the suid’s in syslogins.
    update sysusers
    set suid = l.suid
    from sysusers u, master..syslogins l
    where l.name = u.name
    go

  • A follow-on check is to see whether there are any users who have a different name to their syslogins name but the same suid.
    select l.suid, u.suid, l.name, u.name from master..syslogins l, sysusers u
    where l.suid = u.suid and l.name != u.name and u.name != "dbo"
    go

    If that returns any rows then they need to be deleted from sysusers with the following query:
    delete from sysusers
    where suid in (select suid from master..syslogins)
    and name not in (select name from master..syslogins)
    and name != "dbo"
    go

  • You should also check for any users in sysusers which do not match anything in syslogins
    select * from sysusers u
    where u.suid not in (select suid from master..syslogins)
    and name != "dbo"
    and suid > 0
    go

    These can be removed with
    delete from sysusers
    where suid not in (select suid from master..syslogins)
    and name != "dbo"
    and suid > 0
    go

  • Disable updates to system tables
    sp_configure 'allow updates to system tables',0
    go


Restoring the Master Database

To restore a failed master database, perform the following steps:

  1. Run dataserver with the -b and -d options to create a new master device:
    dataserver -d /dev/rsd1b -b 100M
  2. Start SAP ASE in single-user mode:
    startserver -f RUN_SYBASE -m
  3. You can use the master database, but presently it doesn't have any references
    to any user databases or sybsystemprocs.
    If master was expanded beyond its default size, run alter database to extend it.
  4. Update the backup server name in sysservers if it isn't SYB_BACKUP:
    begin transaction
    update sysservers set srvnetname = "PROD_BACKUP"
    where srvname = "SYB_BACKUP"
    commit transaction
  5. Load the backup of the master database; after the load completes, it will shut
    down SAP ASE automatically:
    load database master from "/full/path/name/master.dump"
  6. Start SAP ASE in single-user mode, and check that all databases/devices/logins
    appear to be restored correctly.
  7. If everything appears okay, you're nearly done. Shut down and restart SAP ASE
    normally, and skip to step 9.
  8. If everything doesn't appear okay, you have more work to do. If additional
    devices, databases, logins, or users of master were created after the last backup
    of master, you'll need to re-create them.
    SAP ASE provides two commands to recover existing devices and databases to
    a restored master database:
    – disk reinit
    – disk refit

    Use disk reinit to recover devices created since the last dump of master; disk reinit
    restores information to sysdevices without reinitializing the device,
    which retains the data which is on the device. Values supplied to disk reinit
    should match values supplied to the original disk init command.

    The syntax for disk reinit follows:
    disk reinit name = logical_device_name,
    physname = physical_device_name,
    vdevno = virtual_device_number,
    size = number_of_pages | K | G | T

    disk refit is used after running disk reinit to rebuild the sysusages and sysdatabases tables in master from the database fragments found on the restored devices.
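
    As a hedged illustration (the logical name, physical path, vdevno and size below are hypothetical and should match the original disk init values), the pair of commands might be run as:
    disk reinit name = 'data_dev1',
    physname = '/dev/rsd2a',
    vdevno = 3,
    size = '2G'
    go
    disk refit
    go
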
  9. Assuming you need to run it, after executing disk refit, SAP ASE will automatically
    shut down. You should restart SAP ASE in single-user mode
    and verify that all databases are properly restored and are the correct sizes. Run
    dbcc checkalloc() on all recovered databases. If everything appears okay, shut
    down and restart SAP ASE normally. Any changes or additions to logins, configuration
    options, remote servers, remote logins, and roles will still need to be
    re-created.

    The moral of the story is to keep frequent backups of the master database! Best
    practice is to schedule a nightly backup of the master database. In addition, back
    up the master database after each command that affects disks, storage, databases,
    segments, logins, roles, and so on.

Sybase Load / Restore – Autofit

Since ASE 15.7 SP100 you no longer need to worry about creating a database with the same sequence of data and log segments when loading a dump from another database, something you previously had to do.

Now the only consideration is that you have enough space on the data device and log device. For example, if you are loading from a database which was created in this sequence: Data 5, Log 5, Log 5, Data 15, Log 5 and finally Data 10, then all you need to do now is:
create database dbload on datadevice = '30M' log on logdevice = '15M'
and then load the database; the load sorts the fragments into the correct order automatically.

Sybase Cumulative Dump / Backup

Sybase has a relatively new backup type called cumulative, which backs up all the changes in the database since the last full backup/dump and can be used in between full backups and tran backups to give greater flexibility, especially on large databases. It is similar to the differential backup in SQL Server.

To enable this backup for a database you must first enable the 'allow incremental dumps' option in the database, e.g. sp_dboption <dbname>, 'allow incremental dumps', true

You can then issue the cumulative backup command as long as there is an existing valid Full Dump backup:
dump database <dbname> cumulative to "/tmp/tore1-cum.dmp"
go


To load the cumulative dump the syntax is:
load database <dbname> cumulative from "/tmp/tore1-cum.dmp"
go

Backup / Restore order

In terms of using the cumulative backup you could for example take a weekly Full backup, then daily cumulative backups and hourly transaction log backups.

So in a recovery situation you would first recover the Full Backup, then the latest Cumulative Backup and finally the transaction log dumps up to the required point in time.
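
A hedged sketch of that recovery sequence (the database name and dump file paths are illustrative):

load database mydb from "/dumps/mydb_full.dmp"
go
load database mydb cumulative from "/dumps/mydb_cum.dmp"
go
load transaction mydb from "/dumps/mydb_tran_09.dmp"
go
load transaction mydb from "/dumps/mydb_tran_10.dmp"
go
online database mydb
go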

Database Replication MSA with dump and load – Easy setup

Pre-Steps

The Repserver interfaces file will need entries for both Primary and Replicate Dataservers and the Primary Dataserver’s interface file will need an entry for the Repserver.

 

  • Let's first get some things set up and make a note of the various user accounts and passwords.
    Make a note of the various login passwords you need: the Dataserver sa login, the Dataserver maint login (which needs to be created on both dataservers), the Replication Server sa login and the Replication Server RepAgent login.
    Unless you have an account with the same name/password on all three servers (RepServer and the two dataservers), e.g. sa, you will also need to create a RepAdmin login on the RepServer and the two Dataservers, giving it sa_role and replication_role on the Dataservers and sa role on the RepServer. This login MUST be used when defining the subscription later on.
    If the maint user already exists, drop it from the primary and instead alias the maint login to dbo, e.g. sp_addalias Database_maint, dbo

  • Create the connections to the source (pubs2) and replicate (pubs2_rep) databases using rs_init, marking pubs2 as being replicated.

  • After the connections are created you need to disable the firing of any triggers on the replicate side with:
    Alter connection to <replicate server>.<replicate db>
    set 'dsi_keep_triggers' to 'off'
    go
  • Mark the whole Primary/Source database (e.g. pubs2) for replication with
    sp_reptostandby pubs2, 'all'
    go
    You can also just mark individual table(s) for replication with:
    use pubs2
    go
    sp_setreptable table_name, true
    go

  • If you are setting the whole database to replicate (sp_reptostandby) then you also need to enable warm standby transactions which enables/allows replication of DDL statements.
    sp_config_rep_agent pubs2,
    send_warm_standby_xacts,true
    go
  • Make sure the RepAgent is up and running, as it is needed for the dump marker to trigger below. This can be checked in the primary Dataserver with sp_help_rep_agent and also on the RepServer with admin who. Another thing to check is for a valid secondary truncation point in the primary database: run dbcc gettrunc and look at the secondary trunc state, which should be 1 for valid. If it is not, stop the RepAgent, run dbcc settrunc ('ltm', 'valid'), then in the RSSD run rs_zeroltm <primary server>, <primary db>, and then back in the primary database start up the RepAgent and check the secondary truncation point again. A sketch of the commands involved is shown below.
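    A minimal sketch of that sequence (using sp_stop_rep_agent / sp_start_rep_agent to stop and start the RepAgent; server names are placeholders):
    -- in the primary database
    sp_stop_rep_agent pubs2
    go
    dbcc settrunc ('ltm', 'valid')
    go
    -- in the RSSD
    rs_zeroltm <primary server>, pubs2
    go
    -- back in the primary database
    sp_start_rep_agent pubs2
    go
    dbcc gettrunc
    go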
  • Assuming the above is fine, we now create the database replication definition using the RepAdmin login:
    create database replication definition pubs2_rdf
    with primary at <Primary server>.<Primary DB>
    replicate ddl
    go
  • Define subscription to database rep def using RepAdmin login:
    define subscription pubs2_sub
    for database replication definition pubs2_rdf
    with primary at <Primary server>.<Primary DB>
    with replicate at <Replicate server>.<Replicate DB>
    use dump marker
    go

    One very important thing to note with create or define subscription is that the login you use to run this, either interactively or through a script, must exist on the primary (and secondary) dataserver and the RepServer with the same name and password.
    You can check the creation of the subscription in the RepServer errorlog; there should be a line saying “Placing marker in <Primary server>.<Primary db> log”.

  • Dump the Primary Database
    Once the dump is complete you should check the RepServer errorlog again, as soon as the dump marker gets processed there should be an entry in the log like:
    “Distributor for ‘<Primary server>.<Primary DB>’ received and processed enable marker. Distributing commands to replicates.
    Distributor for ‘<Primary server>.<Primary DB>’ received and processed dump marker.
    The DSI thread for database ‘<Replicate server>.<Replicate DB>’ is shutdown.”

  • Now Load the dump into the Replicate DB and Online the database

  • You now add the maint user as an alias to dbo (or give it equivalent permissions) in the replicate database, unless it has exactly the same suid on both servers, in which case it will already be aliased to dbo; check this either way.

  • Now remove the secondary truncation point from the replicate database with
    dbcc settrunc ('ltm', 'ignore')
  • Resume the connection to the replicate database DSI
    Resume connection to <Replicate server>.<Replicate DB>


  • That should be it; you should now have database replication set up between pubs2 and pubs2_rep, which you can test by creating a new table and entering some data in pubs2 and checking that it gets created and sent across to pubs2_rep, as sketched below.
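    A minimal test sketch (the table name repl_test is hypothetical); run this against the primary and then check the replicate:
    use pubs2
    go
    create table repl_test (id int, note varchar(30))
    go
    insert into repl_test values (1, 'hello from primary')
    go
    -- then on pubs2_rep
    select * from repl_test
    go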



    By the way, to allow replication of DDL statements you might need to enable 'ddl in tran' in the replicate database. This should already be covered by the “Alter connection to <replicate server>.<replicate db>” step, but in case of any errors/issues just be aware of it.
    use master
    go
    sp_dboption pubs2_rep, 'ddl in tran', true
    go

Keep-alive and chrome refresh scripts

Keep-alive is a VBScript file as follows:

Set WshShell = WScript.CreateObject("WScript.Shell")
a = 0
Do While a < 30000
    a = a + 1
    WScript.Sleep 595000          ' wait roughly ten minutes
    WshShell.SendKeys "{F15}"     ' send a harmless key press so the session stays active
Loop

Chrome refresh is a powershell script wrapped around a bat file:

bat file;

cd H:
psa chrome_loop.ps1

Powershell script;

while(1) { # Loop forever
    sleep -Seconds 899 # Wait 15 minutes
    $wshell = New-Object -ComObject wscript.shell
    if($wshell.AppActivate('Chrome')) { # Switch to Chrome
        Sleep 1 # Wait for Chrome to "activate"
        $wshell.SendKeys('{F5}') # Send F5 (Refresh)
    } else { break } # Chrome not open, exit the loop
}

In-Memory OLTP Guide

Table Of Contents
  1. Document Notation
  2. SQL Server In-Memory OLTP Overview
  3. In Memory Objects
  4. In Memory Planning
  5. In Memory Identification  
  6. Monitoring In Memory 
  7. Using Resource Pools With In Memory
  8. In Memory Disk Space Usage
  9. In Memory High Availability
  10. In Memory Object Maintenance
  11. Backup Restore And  Requirements For RAM
  12. Hash Indexes And Hash Buckets On In Memory Tables
  13. Configuration and Setup – Guidance For Stable Environments
  14. Administrating In Memory
  15. In Memory Troubleshooting

Document Notation

Abbreviations:

GUI = Graphical User Interface

AD = Active Directory

HA = High Availability

DR = Disaster Recovery

SSMS = SQL Server Management Studio

OOM = Out of Memory

Terminology:

In Memory = In-Memory OLTP

In Memory tables = Memory-optimized tables

In Memory table types = Memory-optimized table types

Primary Replica = A SQL instance on a node hosting an availability group where the databases are in read / write mode. This replica sends log data to the secondary replicas.

Secondary Replica = A SQL instance on a node hosting an availability group where the databases are in read only or unavailable mode. This replica receives log data from the primary replica.

SQL Server In-Memory OLTP Overview

SQL Server In-Memory OLTP is an embedded technology within SQL Server since SQL 2014. It consists of an engine referred to as 'Hekaton' that runs alongside the traditional disk-based query processing engine. Traditional SQL Server operations that occur in data cache could also be described as “in memory”; however, In-Memory OLTP refers specifically to operations performed by the Hekaton engine on memory-optimized objects such as in memory tables. This combination can significantly improve the performance of transaction processing, data ingestion and data load, and transient data scenarios.

Note, whilst In-Memory OLTP is largely compatible with traditional SQL databases there are some limitations on feature support and interoperability. These are listed in the appendix.

In Memory Objects

There are 3 types of in memory objects. The following table describes each.

Type                              Description
Memory-optimized tables           Used to store data in a similar way to traditional tables. There are two sub types:
                                  Durable – Data is permanently stored in memory and on disk and is available after a server restart.
                                  Non Durable – Data is transient, stored only in memory and not available after a server restart.
Memory-optimized table types      Used to store transient data as an alternative to using traditional temporary tables or table variables.
Natively compiled T-SQL modules   T-SQL modules that can be natively compiled: stored procedures, triggers, and scalar user-defined functions. Natively compiled modules are stored as .dll files that exist on the host file system.
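
As an illustration of the two table sub types, a minimal sketch (the table and column names are hypothetical, and the database must already have a memory-optimized filegroup) might look like this:

CREATE TABLE dbo.DurableExample
(
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)  -- durable: data survives a restart
GO
CREATE TABLE dbo.NonDurableExample
(
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)      -- non durable: only the schema survives a restart
GO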

In Memory Planning

Use of in memory technology and its suitability for individual applications is out of scope for this guide. The following section is concerned with ensuring that, when an application does use in memory, the environment is adequately sized.

RAM AND DISK REQUIREMENTS

RAM use by durable and non durable objects should be estimated as accurately as possible for each database using in memory.

Example

Database Name    Non Durable (GB)    Durable (GB)
IN_MEMDB1        1                   13
IN_MEMDB2        0                   5
IN_MEMDB3        4                   7

Minimum RAM required = (Total Non Durable + Total Durable) + 30% of (Total Non Durable + Total Durable) + Non in memory SQL requirements + 4GB for OS

e.g. 5GB (non durable) + 25GB (durable) + 9GB (30% overhead) + 16GB (non in memory SQL requirements) + 4GB (OS) = 59GB minimum RAM in the server.

Minimum RAM required should be treated as a rough guideline unless the application estimates are very accurate. It is highly recommended to over-spec RAM for in memory applications, as out of memory conditions are best resolved by having more RAM available. On physical machines especially, there is a significant time delay in adding RAM. For example, if a best-guess calculation comes to 48GB it may be prudent to go for 64GB if unsure.

Note: if the server is co hosting with other applications “4GB for OS” should be increased to accommodate their requirements.

Minimum Disk Space Required

By default the E: drive will hold the in memory “on disk” related files. RAM requirements need to be known at this stage.

Example

Type                                                   Size in GB
Total Durable RAM GB * 4                               100
Total size GB of on disk data files (mdf/ndf) on E:    300
Min 25% overhead                                       100
Minimum Disk Space Required for E: = (Total Durable RAM GB * 4, i.e. the checkpoint files) + Total size GB of on disk data files (mdf/ndf) + Min 25% overhead

e.g. 100GB + 300GB + 100GB = 500GB

Note: As soon as an in memory object of any kind is created in a new database the in memory filegroup on disk gets pre-populated with checkpoint files. On <=8GB RAM instances this is approximately 135MB of disk space, on >8GB RAM it is approximately 2GB. 

DR / UAT / DEV SERVER REQUIREMENTS

In order to transfer a production database to another environment the target server should have enough RAM and Disk Space to accommodate it.

 DR Server must have identical RAM and Disk space to Production 

 DR and UAT servers should have enough RAM and disk space to accommodate all or a subset of the production databases. This decision is application specific. If the databases contain durable objects then disk space and RAM should be considered on the target, if only non durable objects are contained then only RAM needs to be considered.

 Conversely if a database needs to be transferred from non prod to prod the target should have at least the same disk and RAM specifications as the source.

 Any RAM/Disk upgrades to the production and DR should also be reflected in non prod where applicable.

In Memory Identification  

In Memory is seamlessly integrated into SQL server and can be administered using SSMS and TSQL commands. There are also numerous DMVs containing in memory object related information. It is useful however to be aware of additional components and differences inside and outside of SQL. 

IN MEMORY DATABASE FILEGROUP

To use any in memory objects the user database requires a memory optimized file group, this can be added to an existing database or specified at initial creation. 

An example create database statement is below. Note the FILEGROUP ... CONTAINS MEMORY_OPTIMIZED_DATA clause.

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = 500MB)
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

This special filegroup is logically associated with one or more “containers”. A container is a folder/directory on the host file system; the example above has one container and it is located at E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1.

Containers for filegroups should be located in subdirectories off the E: drive at this location

E:\MSSQL13.<instancename>\MSSQL\DATA

e.g. for a default instance 

E:\MSSQL13.MSSQLSERVER\MSSQL\DATA

Inside the container folder(s) are subdirectories containing in memory object related files such as .dll files and checkpoint files (discussed later).

Key points are

* In memory objects are stored on the filesystem and use disk space

* They require the same access rights as .mdf/.ldf files

* They should not be deleted/modified outside of SQL 

* Durable in memory objects can use up a lot of disk space and the MAXSIZE parameter must be carefully considered

* Only one in memory filegroup is allowed per user database although it may have multiple containers

SERVER PROPERTY ISXTPSUPPORTED

To verify a SQL instance can support the in memory feature run the following t-sql command.

select SERVERPROPERTY('IsXTPSupported')

This should return 1 for the DB Engineered versions of SQL as it is Enterprise Edition. If it returns 0 then in memory cannot be used. 

SP_BLITZINMEMORYOLTP

In memory objects can be difficult to identify quickly. In SSMS in memory tables and natively compiled stored procedures appear the same as non in memory equivalents. This can hide the presence of in memory objects and cause confusion. To get a snapshot of all in memory objects run dba.dbo.sp_BlitzInMemoryOLTP. This provides detailed information including 

* What databases contain in memory objects

* Current RAM and on disk space usage of in memory objects

* Names of in memory objects and their types (table / module / table type)

* Locations of filegroups, containers and details of checkpoint file status

* Hash index information

With the output from sp_BlitzInMemoryOLTP it is possible to see what is “in memory” and take action appropriate to the technology.

The main contributors are Ned Otter, Konstantin Taranov and Aleksey Nagorskiy, who are leading experts in the subject of in memory for SQL. This should be installed as part of additional build configuration detailed in the section Configuration And Setup.

Monitoring In Memory 

In memory objects use RAM and often some file system space. How much depends on what type of object and whether its data is durable or non-durable.

The following table describes the resource usage of each type

In Memory Object Type: Durable Table
RAM Usage: Yes, amount depends on application requirements
Disk Usage: Yes, this varies but an approximate guide is to allow 4x the RAM usage
Notes: If an in memory durable table requires 1GB of RAM it will need approximately 4GB of disk space. By default the on disk components are stored under subdirectories here: E:\MSSQL13.<instancename>\MSSQL\DATA, e.g. for a default instance E:\MSSQL13.MSSQLSERVER\MSSQL\DATA

In Memory Object Type: Non-Durable Table
RAM Usage: Yes, amount depends on application requirements
Disk Usage: Yes
Notes: Data is available until the instance is restarted, across all sessions. By default the on disk components are stored under subdirectories here: E:\MSSQL13.<instancename>\MSSQL\DATA. Amount required varies; sizes can go up to 5GB, see section Overview for details.

In Memory Object Type: Table Type
RAM Usage: Yes, amount depends on application requirements
Disk Usage: Yes
Notes: Data is specific to the session/spid that created it, within the batch only. Like traditional table variables it is removed when the batch completes, so RAM usage is highly transient. By default the on disk components are stored under subdirectories here: E:\MSSQL13.<instancename>\MSSQL\DATA. Amount required varies; sizes can go up to 5GB, see section Overview for details.

In Memory Object Type: Natively compiled module
RAM Usage: Yes, a very small amount (< 100KB)
Disk Usage: Yes, a very small amount for the .dll and associated files (< 1024KB). By default the on disk components are stored under subdirectories here: E:\MSSQL13.<instancename>\MSSQL\DATA\xtp, e.g. for a default instance E:\MSSQL13.MSSQLSERVER\MSSQL\DATA\xtp

The key point to note from the table above is that durable tables need the most management: both RAM and potentially large amounts of disk space are required. They will also grow until data is explicitly deleted by the application, in the same way an on disk table would. Non durable tables also need disk space.

VIEWING IN MEMORY OBJECT USAGE VIA SSMS

SQL Server Management Studio (SSMS) version 16 and above contains a high level report detailing durable and non-durable memory-optimized tables. It is accessible via

Right Click on database name – Reports – Memory Usage By Memory Optimized Objects

This report does NOT include active Memory-optimized table types RAM usage or give details of on disk space usage but can be useful in getting a quick overview of how much RAM is in use by a single database. 

VIEWING IN MEMORY OBJECTS VIA DMV’S

There are numerous DMVs available to monitor in memory objects. Many of these are referenced in the stored procedure sp_BlitzInMemoryOLTP described in the SP_BLITZINMEMORYOLTP section above. Examination of this stored procedure can help provide the basis of custom scripts and adhoc queries. Other useful queries are detailed in the section Administrating In Memory. An example query that details RAM usage for currently active Memory-optimized table types is below.

SELECT type, name, memory_node_id, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'

Using Resource Pools With In Memory

Resource pools are logical containers used to define a specific percentage of resource (RAM) space available to database in memory objects. Only databases bound to the resource pool may use this space. This prevents in memory objects from consuming all the available memory, stops other processes from consuming space reserved for in memory objects, and allows monitoring to alert when a specific database bound to a pool is getting close to its allocated memory.

Note: The actual size of available space inside a resource pool can change due to the fluctuations in availability of total free RAM within the instance. This occurs less on instances with larger amounts of RAM. 

An instance may contain multiple resource pools but a database can only be bound to one at a time. The bank design standards require each user database containing in memory objects to go in its own pool. This allows finer grained control and monitoring.

Resource Pools And In Memory Table Allocation

Creating resource pools, adding databases to resource pools and other administrative tasks are documented in section Resource Pool Viewing , Creation And Modification.

Inside a resource pool only a certain percentage can be used by Memory-optimized tables; this percentage depends on how much RAM is available to the instance. This figure is shown in the committed_target_kb column of sys.dm_os_sys_info. The following table shows the percentages based on the RAM available.

committed_target_kb (GB)    Percentage allocated for in memory tables
<= 8GB                      70%
<= 16GB                     75%
<= 32GB                     80%
<= 96GB                     85%
> 96GB                      90%

Source:

https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/bind-a-database-with-memory-optimized-tables-to-a-resource-pool?view=sql-server-ver15

For example if a 10GB resource pool is allocated for an instance with <=8GB available only 7GB will be available for Memory-optimized tables, therefore when creating resource pools the size should be allocated based on this percentage.
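
To check the figure on a given instance, the DMV named above can be queried directly:

SELECT committed_target_kb / 1024 / 1024 AS committed_target_gb
FROM sys.dm_os_sys_info;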

NOTE: Non in memory tables and other objects are not affected by RAM restrictions in resource pools.

Binding Databases To Resource Pools

After a resource pool is created the database containing in memory objects needs to be added to it. This is referred to as binding. The binding only becomes effective when the database is taken offline and put online. An alternative technique is to bind the database and perform an AlwaysOn failover and failback. This is described in this section Binding A Database To A Resource Pool .

In Memory Disk Space Usage

As mentioned previously in memory tables with durable data require disk space. As an approximate guide it requires 4 times the amount of disk space that is stored in RAM, E.g. 8GB of RAM usage needs 32GB of disk space. In certain circumstances in memory durable tables can consume considerably more than expected and if there is insufficient disk space the databases can go into a suspect state. Knowledge of how durable objects use space, correct planning and suitable maintenance is required to maintain stability.

Transactions And The In Memory Checkpoint Process

All in memory transactions with durable data go through the normal database transaction log and use up space within it until they are flushed to disk by the in memory checkpoint process. The in memory checkpoint process is a second checkpoint process run by the Hekaton engine on its own threads. It is designed with high performance in mind and will only trigger automatically when in memory transactions in the log have produced 1.5 GB* since the last checkpoint. This can cause slower recovery times for in memory databases as potentially there are more uncommitted transactions.

The checkpoint process has a secondary role: to perform a file merge of the data/delta files on disk. This is a separate process that combines data/delta files into a smaller number of files to reduce disk space and speed up query performance. The merge process happens asynchronously, and the disk space used for data/delta files can be observed to grow and shrink between bursts of transactions as it catches up. Sustained transactions can cause disk space usage to grow considerably, and the amount of disk space required should be based on the size of the data/delta files before they are merged.

This guide is primarily concerned with stability, however it is worth noting that for in memory durable tables to perform transactions, additional IOPS capacity is required to allow for the file merge process overhead. Microsoft recommends 3x the IOPS capacity required for disk based tables.

Important !!! Databases containing in memory objects should have a log size of at least 5GB

*12GB when large checkpoint mode is enabled see section Large Checkpoint Mode

Controlling Disk Space Use

The amount of disk space used by in memory objects is governed by how much data is stored and the transactional characteristics of the application. This is made more complicated than normal because of the relationship between the size of the table in RAM and the size needed on disk. It is primarily the job of the application development team to ascertain how much space is required and to run the application on hardware with enough resources. The application team will also have to ensure that in memory table sizes are controlled through suitable data purging and that application monitoring is in place. Whilst filegroups can be set to a certain size to prevent filling up the disk, if that limit is reached the database will stop working and go into a suspect state. It can often be resumed using techniques detailed elsewhere in this guide, however stability will be affected. Thorough testing, correct planning and ongoing monitoring is the best way to control disk space.

In Memory High Availability

Transactional replication, SQL AlwaysOn and log shipping can be used with in memory objects. Database mirroring does NOT work with in memory objects.

Whilst memory-optimized tables are supported, only the data from tables with durable data is replicated. If a failover occurs, memory-optimized tables defined as schema only need to be manually repopulated on the new primary node. Secondary replicas defined as read only can only be used to provide read only access to durable memory-optimized tables.

In Memory Object Maintenance

Whilst in memory objects appear to be very similar to on disk objects, many of the traditional scheduled maintenance jobs are not required or are implemented in a different way.

Maintenance for traditional disk based tables consists of three main activities

  1. Consistency checks using DBCC commands
  2. Reindexing and reorganisation of indexes
  3. Statistics updates

The following table describes maintenance required for in memory objects and how they compare with traditional activities

Traditional: Consistency checks
In Memory Equivalent: Perform a full backup or restore
Required if: Memory-optimized tables with durable data exist
Notes: A checksum is calculated for every block written to data/delta files; those checksums are recalculated any time the block is read, such as during a full backup. If the backup completes successfully it indicates that there is no corruption. Backup failures should be investigated thoroughly. Corruption can also be detected when restoring a backup file.

Traditional: Reindexing and reorganisation of indexes
In Memory Equivalent: None
Required if: N/A
Notes: In memory indexes cannot be reindexed or reorganised. A table with an inefficient hash index can be identified and altered to optimise it or replace it with a more suitable index, discussed in section Hash Indexes And Hash Buckets On In Memory Tables. This is not in the scope of scheduled maintenance.

Traditional: Statistics updates
In Memory Equivalent: Statistics updates with recompile
Required if: Memory-optimized tables exist
Notes: Databases at compatibility level 130 or higher with in memory tables can have the Auto Update Statistics option set to on. Additionally the standard update statistics job should be run; this has been modified to also recompile any natively compiled T-SQL modules found, as they will not automatically recompile the way traditional stored procedures do.
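
A minimal sketch of the statements such a job would issue for an in memory table (the table and procedure names here are hypothetical):

-- refresh statistics on a memory-optimized table; FULLSCAN was mandatory before SQL Server 2016
UPDATE STATISTICS dbo.DurableExample WITH FULLSCAN;
GO
-- natively compiled modules do not recompile automatically, so force a recompile
EXEC sp_recompile N'dbo.usp_NativeProcExample';
GO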

Use Of DBCC Commands

DBCC commands with REPAIR options do NOT work on in memory tables. If a table is corrupted it must be recreated and repopulated or the data restored from a last known good backup.

DBCC CHECKTABLE is not compatible with in memory tables

An attempt to run it will result in an error message similar to below

Object ID 1957582012 (object ‘badhashInMem’): The operation is not supported with memory optimized tables. This object has been skipped and will not be processed.

Backup Restore And  Requirements For RAM

The standard T-SQL BACKUP and RESTORE commands are fully compatible with databases containing in memory objects.

The following MUST be noted for restore operations. 

Unlike disk-based tables, memory-optimized tables with durable data must be loaded into memory before the database is available for user access. This requirement adds a new step in the database recovery. If the server does not have enough available memory, database recovery fails and the database is marked as suspect. Therefore when restoring databases containing memory-optimized tables with durable data to different environments the target must have at least as much available RAM as the source.

This restriction only applies for databases containing memory-optimized tables with durable data. Other in memory object types and non durable memory-optimized tables are NOT affected.

Hash Indexes And Hash Buckets On In Memory Tables

There is no maintenance required for indexes on in memory tables; however, knowledge of one type of index known as a HASH index is useful to indicate potential performance problems and, in extreme cases, to prevent out of memory (OOM) issues. Use of HASH indexes and their design is primarily of interest to the developer, but as memory allocation is involved and there is only a finite amount of it, DBAs should be aware of how they work. Note, HASH indexes can exist on memory-optimized tables with durable and non durable data and on memory-optimized table types.

HASH buckets are specified when a HASH index is created. This type of index is extremely efficient for queries using '='. Specifying the correct number of buckets requires knowledge of the data uniqueness and the number of rows in the table. An example create statement is below.

CREATE TABLE [dbo].[badhashInMem]

(

       [PK] [int] IDENTITY(1,1) NOT NULL,

       [col1] [varchar](1000) COLLATE Latin1_General_CI_AS NOT NULL,

 PRIMARY KEY NONCLUSTERED HASH

(

       [PK]

)WITH ( BUCKET_COUNT = 16)

)WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )

HASH buckets can be thought of as pigeon hole storage. If you had 26 pigeon holes marked A-Z you could sort mail by the first letter of the surname. Mr Cooper would go in the hole marked 'C', Bob Brown would go in the hole marked 'B' and so on. If you had 1000 mail items all for people with a surname beginning with 'B', the pigeon hole 'B' would contain 1000 mail items, and although you could locate pigeon hole 'B' quickly you would then spend a lot of time going through its contents. This is analogous to hash index row chaining.

At the other extreme, if you had only 26 mail items addressed to surnames spanning exactly A-Z you would have only one mail item in each pigeon hole. This would be the perfect distribution for a hash index with a bucket count of 26.

It is rare that there will always be an optimal key-value-to-bucket ratio. The following guidance from Books Online suggests that, within certain boundaries, over-allocation of buckets (empty buckets) or row chaining is acceptable.

Empty buckets:

                    33% of total buckets is a good target value, but a larger percentage (even 90%) is usually fine.

                    When the bucket count equals the number of distinct key values, approximately 33% of the buckets are empty.

                    A value below 10% is too low.

Chains within buckets:

                    An average chain length of 1 is ideal in case there are no duplicate index key values. Chain lengths up to 10 are usually acceptable.

                    If the average chain length is greater than 10, and the empty bucket percent is greater than 10%,

                    the data has so many duplicates that a hash index might not be the most appropriate type.

Database Administration stability perspective

Key point: The number of buckets does not restrict the size of the index or how many records it can contain. You can have a bucket count of 1024 and 1 million records or a million buckets and 1024 records. Bucket numbers are there for optimisation and performance.

From a stability perspective ONLY, each bucket takes up 8 bytes of RAM; for example, a hash index declared with 1,073,741,824 buckets would consume roughly 8GB of RAM for the buckets alone. In an extreme situation, if buckets are hugely over-allocated this could cause an out of memory issue even if there were no rows in the underlying table. An under-allocation of buckets may affect performance but would not take up as much memory.

To view HASH index state the following query can be run

-- Run in the database containing in mem tables with hash indexes
select t.name AS [TableName], i.name as [IndexName],
FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) as [free_bucket_pct],
IIF(FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) < 33, 'Free buckets % is low! Increase buckets to prevent bucket sharing', 'OK') AS [Free buckets status],
h.avg_chain_length,
IIF(h.avg_chain_length > 10 AND FLOOR((CAST(h.empty_bucket_count AS FLOAT) / h.total_bucket_count) * 100) > 10, 'avg_chain_length has many collisions! Index key columns are highly duplicated, not suitable for HASH index', 'OK') AS [avg_chain_length status]
from sys.dm_db_xtp_hash_index_stats h
inner join sys.indexes i ON h.object_id = i.object_id AND h.index_id = i.index_id
inner join sys.tables t on h.object_id = t.object_id

To alter the amount of buckets on an index an example query is below

ALTER TABLE dbo.inmemory_tab1

ALTER INDEX inmemory_tab1_primaryKey

REBUILD WITH(BUCKET_COUNT = 123030)

GO

Configuration and Setup – Guidance For Stable Environments

It is crucial that there is sufficient RAM available to allow in memory to function. The following table provides guidance based on the percentage of RAM used by in memory objects.

Percentage of RAM required for in memory objects    Action
Between 0 and 40%                                   Proceed with setting up the instance.
Between 40 and 70%                                  Discuss with the application team the likely growth of the in memory objects when live. If you are confident that data levels will not increase beyond 70%, proceed. If required, arrange a meeting with database engineering and the application team before proceeding.
Greater than 70%                                    Arrange a meeting with the application team and database engineering to discuss further. Do not proceed until signed off by stakeholders.

Administrating In Memory

In Memory Filegroup And Containers

The following section provides examples of how to create and modify an in memory filegroup and its containers. Note: once a filegroup is added it cannot be removed without dropping the database.

Create A New Database With An In Memory Filegroup And One Container

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT -- memory optimized filegroup
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED) -- container
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

Note: The container line does not have a FILEGROWTH parameter as it is a directory. The MAXSIZE parameter can be set to a fixed figure e.g. 10GB, in this example it is UNLIMITED

Create A New Database With An In Memory Filegroup And Two Containers

CREATE DATABASE [inmemDB]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'Data', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 102400KB ),
 FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA DEFAULT -- memory optimized filegroup
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED), -- container 1
( NAME = N'InMemDB_inmem2', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED) -- container 2
 LOG ON
( NAME = N'Log', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB.ldf' , SIZE = 102400KB , MAXSIZE = 2048GB , FILEGROWTH = 102400KB )

Add An In Memory Filegroup With One Container To An Existing Database

ALTER DATABASE [inmemDB] ADD FILEGROUP [inmemDB_inmem] CONTAINS MEMORY_OPTIMIZED_DATA -- add filegroup
GO
ALTER DATABASE [inmemDB] ADD FILE
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED)
TO FILEGROUP [inmemDB_inmem] -- add container to filegroup

GO

Add Another Container To An Existing Database

Useful if an emergency container needs to be added on another volume because of space shortages on the primary volume. Note, if the database is in an AG the path specified must exist on all the replicas.

ALTER DATABASE [inmemDB] ADD FILE
( NAME = N'InMemDB_inmem2', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED)
TO FILEGROUP [inmemDB_inmem] -- add container to filegroup

GO

Remove A Container From An In Memory Filegroup

Note: Once in memory objects have been added to a database the containers are initialised. At this point there must be at least one container. The last one can never be removed. All other containers can only be removed if there is enough space in the existing containers to do so.

ALTER DATABASE [inmemDB]  REMOVE FILE InMemDB_inmem2

Change The Maxsize Parameter On An Existing Container

Useful if a fixed size has been set and more space is required

ALTER DATABASE [inmemDB] MODIFY FILE
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED) -- was 500MB

View How Much File Space Is In Use By Memory Optimized Filegroups

There are several methods to do this; suggestions are below.

Run sp_BlitzInMemoryOLTP:

This has several parameters two of the main options are below.

.EXAMPLE
    EXEC sp_BlitzInMemoryOLTP;
    -- Get all In-memory information
.EXAMPLE
    EXEC sp_BlitzInMemoryOLTP @dbName = N'inmemDB';
    -- Get In-memory information for database with name inmemDB

This displays a lot of in memory information including space in use by containers

Run a SQL Query:

The example queries below are for a database called inmemDB, substitute this name for the relevant database.

-- this query shows the in memory filegroups
SELECT * FROM inmemDB.SYS.FILEGROUPS

-- this query gives the size of the files (for normal files) and the directory size for the in memory folder/container
-- note it is listed as type filestream, which it is not, but it uses a similar method of storing data
SELECT (SIZE*8)/1024 [mb used], * FROM inmemDB.SYS.DATABASE_FILES

Look on the file system:

Use explorer / DOS / Powershell  to see how much space is used by the containers.

Resource Pool Viewing , Creation And Modification

The following section provides instructions and  examples of how to create and modify resource pools and add and remove databases to and from them.

Creating A Resource Pool

An example command to create and bind a resource pool is below.

CREATE RESOURCE POOL inmemDB_IM_RP WITH ( MIN_MEMORY_PERCENT = 6  ,MAX_MEMORY_PERCENT = 6  );

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

sp_xtp_bind_db_resource_pool [inmemDB],[inmemDB_IM_RP]

GO

Resource pools are created using percentages of memory available to the instance. Additionally, as mentioned in this section Resource Pools And In Memory Table Allocation an overhead is required to ensure there is enough space for in memory tables.

Once you know how much memory to allocate, load this script into SSMS and alter these two parameters

-- enter GB figure of in memory requirements
declare @memoryNeeded NUMERIC(8,2) = 0.5
-- enter Database Name
declare @db_name sysname = 'inmemDB'

Change @memoryNeeded to the required value in GB and @db_name to the name of the database with the in memory objects.

Run the script and go to the messages tab. This contains the create resource pool statement with the required percentages.
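
The original sizing script is not reproduced here; a minimal sketch of the calculation it performs, assuming the percentage table above, might look like this:

-- assumption: simplified illustration of the sizing calculation described above
DECLARE @memoryNeeded NUMERIC(8,2) = 0.5;  -- GB required for in memory objects
DECLARE @db_name sysname = N'inmemDB';
DECLARE @target_gb NUMERIC(18,2), @xtp_pct NUMERIC(5,2), @pool_pct INT;

SELECT @target_gb = committed_target_kb / 1024.0 / 1024.0 FROM sys.dm_os_sys_info;

-- fraction of a pool usable by memory-optimized tables (see the earlier percentage table)
SET @xtp_pct = CASE WHEN @target_gb <= 8  THEN 0.70
                    WHEN @target_gb <= 16 THEN 0.75
                    WHEN @target_gb <= 32 THEN 0.80
                    WHEN @target_gb <= 96 THEN 0.85
                    ELSE 0.90 END;

-- pool percentage = required GB grossed up for the usable fraction, as a share of target memory
SET @pool_pct = CEILING((@memoryNeeded / @xtp_pct) / @target_gb * 100);

PRINT 'CREATE RESOURCE POOL ' + @db_name + '_IM_RP WITH ( MIN_MEMORY_PERCENT = '
    + CAST(@pool_pct AS varchar(10)) + ' ,MAX_MEMORY_PERCENT = ' + CAST(@pool_pct AS varchar(10)) + ' );';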

Important !!! Create the resource pool identically on all the AlwaysOn replicas to ensure there is adequate space if the database fails over.

Modifying A Resource Pool

An example command to modify a resource pool is below.

ALTER RESOURCE POOL inmemDB_IM_RP WITH ( MIN_MEMORY_PERCENT = 14 ,MAX_MEMORY_PERCENT = 14 );

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

If in memory objects require more space, the resource pool containing the database needs to be resized. As with creating a resource pool, sizing is specified as a percentage. This requires converting the size in GB to a percentage and, as mentioned in the section Resource Pools And In Memory Table Allocation, an overhead is required to ensure there is enough space for in memory tables.

Once you know how much memory to allocate, load this script into SSMS and alter these two parameters

-- enter GB figure of in memory requirements
declare @memoryNeeded NUMERIC(8,2) = 1.5
-- enter Database Name
declare @db_name sysname = 'inmemDB'

Change @memoryNeeded to the required value in GB and @db_name to the name of the database with the in memory objects.

Run the script and go to the messages tab. This contains the alter resource pool statement with the required percentages.

Important !!! Resize the pool identically on all the AlwaysOn replicas to ensure there is adequate space if the database fails over.

Note: The database does NOT need to be rebound following a resize of the resource pool.

Viewing Resource Pool Usage

Pool size and usage can be viewed via T-SQL or a custom report from SSMS or windows performance monitor.

Note: In memory space usage will NOT show up until a database is fully bound to a resource pool. Similarly, free space in a resource pool is not reflected until a database is removed AND it is set offline and online.

Note: If resource governor is disabled pool max sizes are IGNORED and all pools will have full access to all the available target memory.

Check that resource governor is enabled by running this query

select * from sys.resource_governor_configuration

A "0" indicates it is DISABLED; to enable it, issue this command:

ALTER RESOURCE GOVERNOR RECONFIGURE;

Via TSQL

The following T-SQL script shows pool information and whether or not there is a database bound to the pool

SELECT p.name as 'Pool Name',
CASE WHEN d.name IS NULL THEN 'No Database Bound !' ELSE d.name END as Database_Name,
p.used_memory_kb/1024 as 'Space Used by In Memory Objects MB',
(p.target_memory_kb)/1024 as 'Pool_Size_MB',
(p.target_memory_kb-p.used_memory_kb)/1024 as 'Pool_Space_Available_MB',
CAST ( (CONVERT(DECIMAL,p.used_memory_kb)/(CONVERT(DECIMAL,p.target_memory_kb))*100) as decimal(18,2)) as 'Percent_Used'
FROM sys.databases d
right outer join sys.dm_resource_governor_resource_pools p
on p.pool_id = d.resource_pool_id
where p.name not in ('internal','default')

Via Performance Monitor

Two main performance monitor counters are used

SQLServer:Resource Pool Stats(<Resource Pool Name>)\Used memory (KB)

SQLServer:Resource Pool Stats(<Resource Pool Name >)\Target memory (KB)

These can be added to a data collector set to monitor pool usage over time.
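
If Performance Monitor is not to hand, the same figures can also be read from T-SQL via sys.dm_os_performance_counters; a hedged example is below (the internal and default pools are excluded to match the query above).

-- Read the resource pool used / target memory counters directly from the DMV
select rtrim(instance_name) as pool_name,
       rtrim(counter_name) as counter,
       cntr_value/1024 as value_mb
from sys.dm_os_performance_counters
where object_name like '%Resource Pool Stats%'
and counter_name in ('Used memory (KB)','Target memory (KB)')
and instance_name not in ('internal','default')
order by pool_name, counter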

Binding A Database To A Resource Pool

A database needs to be bound to a resource pool; an example of how to do this is below.

sp_xtp_bind_db_resource_pool [OOM_DB],[OOM_DB_IM_RP]

Output:

A binding has been created. Take database 'OOM_DB' offline and then bring it back online to begin using resource pool 'OOM_DB_IM_RP'

An example statement to complete the binding is below

ALTER DATABASE OOM_DB SET OFFLINE

ALTER DATABASE OOM_DB SET ONLINE

GO

Note: If the database is participating in an AlwaysOn Availability Group this message is generated when an attempt is made to "SET OFFLINE":

Msg 1468, Level 16, State 1, Line 21

The operation cannot be performed on database “OOM_DB” because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.

Msg 5069, Level 16, State 1, Line 21

ALTER DATABASE statement failed.

In this case, fail over and fail back the resource group containing the database to complete the binding.
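
If the failover is being done via T-SQL rather than Failover Cluster Manager, a manual availability group failover can be issued from the secondary replica that should become primary; the availability group name below is an assumption, substitute your own.

-- Run on the target secondary replica (availability group name is illustrative)
ALTER AVAILABILITY GROUP [AG_inmemDB] FAILOVER;
GO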

Important !!! Remember to bind the database to the same resource group on all the AlwaysOn replicas or you will have databases not bound to pools on failover.

Unbinding A Database From A Resource Pool

An example of how to unbind a database from a resource pool is below.

sp_xtp_unbind_db_resource_pool [OOM_DB]

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

Although the database is now unbound, the resource pool will still show resources in use. To update this, take the database OFFLINE and ONLINE.

An example statement to reset the resource pool usage is below

ALTER DATABASE OOM_DB SET OFFLINE

ALTER DATABASE OOM_DB SET ONLINE

GO

Note: If the database is participating in an AlwaysOn Availability Group this message is generated when an attempt is made to "SET OFFLINE":

Msg 1468, Level 16, State 1, Line 21

The operation cannot be performed on database “OOM_DB” because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.

Msg 5069, Level 16, State 1, Line 21

ALTER DATABASE statement failed.

In this case, fail over and fail back the resource group containing the database to complete the update to the resource pool usage.

Important !!! Remember to unbind the database from the same resource group on all the AlwaysOn replicas or there will be databases bound to incorrect pools on failover.

Removing A Resource Pool

Removing a resource pool returns its memory reservation to the default pool.

Before removing a resource pool, any databases bound to it must be unbound; see above.
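
A quick check such as the one below (the pool name is an example) will list any databases still bound to the pool that is about to be dropped.

-- List databases still bound to the pool before dropping it
select d.name as bound_database
from sys.databases d
join sys.dm_resource_governor_resource_pools p
on d.resource_pool_id = p.pool_id
where p.name = 'OOM_DB_IM_RP'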

An example of how to remove a resource pool is below.

USE [master]

GO

DROP RESOURCE POOL [OOM_DB_IM_RP]

GO

USE [master]

GO

ALTER RESOURCE GOVERNOR RECONFIGURE;

GO

Important !!! Remember to remove unwanted resource pools from all the AlwaysOn replicas so all resources are available on all replicas.

In Memory Databases

Identify in memory databases, bound pools and space usage

The query below can be used to identify which in memory databases exist, which pool they are bound to, and details of the pool size and used space. It also provides remarks on correct pool name assignment.

SELECT d.name [in_memory_database],
CASE WHEN p.name IS NULL THEN 'Not in Resource Pool !' ELSE p.name END [Resource Pool Name],
p.used_memory_kb/1024 [Pool Used MB],p.target_memory_kb/1024 [Pool Size MB],
pc.cntr_value/1024 [Database IM Size MB],
CASE WHEN p.name <> RTRIM(pc.instance_name)+'_IM_RP' THEN 'Database bound to incorrect pool name' ELSE
       CASE WHEN p.name IS NULL THEN 'Database NOT bound to any resource pool' ELSE 'OK' END
END [Notes]
from sys.databases d
inner join sys.dm_os_performance_counters pc
on d.name = pc.instance_name
left outer join sys.dm_resource_governor_resource_pools p
on d.resource_pool_id = p.pool_id
where pc.counter_name = 'XTP Memory Used (KB)'
and pc.instance_name <> '_Total' and pc.cntr_value > 0

View Total Server RAM and Pool Use

The following query can be used to determine how much space is allocated to resource pools for in memory objects and available RAM for other SQL processes.

select si.total_space_for_sql/1024 [Total_Instance_RAM_MB], rp.total_space_for_pools/1024 [Pool_Reservation_MB],
(si.total_space_for_sql-rp.total_space_for_pools)/1024 [Space_Left_For_SQL_MB],
CAST ( (CONVERT(DECIMAL,rp.total_space_for_pools)/(CONVERT(DECIMAL,si.total_space_for_sql))*100) as decimal(18,2)) as 'Percent_Used_By_Resource_Pools'
from
(
select sum(target_memory_kb) [total_space_for_pools],1 [link] from sys.dm_resource_governor_resource_pools where name not in ('internal','default')
) as rp
inner join
(
select committed_target_kb [total_space_for_sql],1 [link] from sys.dm_os_sys_info
) as si
on rp.link = si.link

In Memory Troubleshooting

Overview

In memory troubleshooting from a stability perspective is concerned with two types of issues.

- In memory databases running out of RAM

- In memory databases running out of disk space

RAM

Memory-optimized tables with durable data require special attention as they persist data. After an instance restart an attempt is made to reload all of that data back into RAM as the database starts up. Therefore, if the database went down because RAM was full, it will fill up again and fail to come online unless more memory is made available, either by adding physical RAM or by stopping other processes to free memory.

Memory-optimized tables with non durable data and memory-optimized table types (similar to traditional temporary tables) do release RAM on restart.

Resource pools allow some degree of control as they can be resized without requiring an instance outage; however, this only provides temporary remediation as they are limited to the available RAM in the instance. Frequent upward resizing of resource pools should trigger a discussion with the application team to either reduce the data size of in memory objects or plan to upgrade the server.
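
When deciding whether to resize or push back to the application team, it can help to see which memory-optimized tables are consuming the RAM; a minimal sketch using sys.dm_db_xtp_table_memory_stats (run inside the in memory database) is below.

-- Run in the in memory database: RAM used per memory-optimized table
select object_name(object_id) as table_name,
       (memory_used_by_table_kb + memory_used_by_indexes_kb)/1024 as used_mb,
       (memory_allocated_for_table_kb + memory_allocated_for_indexes_kb)/1024 as allocated_mb
from sys.dm_db_xtp_table_memory_stats
order by used_mb desc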

Disk

Memory-optimized tables, both durable and non durable, and memory-optimized table types do use disk space. Non durable tables and table types theoretically should not, but in testing, databases containing only table types allocated around 2GB each. If the in memory database contains only non durable tables or table types it is recommended to set the filegroup containers to 20GB; similar to resource pools, this allows early warning via monitoring and allows resizing to remediate. For databases containing durable data it is recommended to set the filegroup containers to unlimited as they can grow rapidly. Monitoring is configured to alert when available disk space is getting low. In memory containers can be added to other drives in an emergency; however, in the long term the server will need additional space allocated or the application team will need to reduce the size of the in memory data.
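
To see how the on-disk checkpoint files are being consumed, sys.dm_db_xtp_checkpoint_files can be queried from within the in memory database; a hedged example is below (column names are as per SQL Server 2016 and may differ on other versions).

-- Run in the in memory database: on-disk checkpoint file usage by state
select state_desc,
       file_type_desc,
       count(*) as file_count,
       sum(file_size_in_bytes)/1024/1024 as total_size_mb
from sys.dm_db_xtp_checkpoint_files
group by state_desc, file_type_desc
order by total_size_mb desc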

Error 41822 – There is insufficient disk space to generate checkpoint files

Full message:

There is insufficient disk space to generate checkpoint files and as a result the database is in delete-only mode. In this mode, only deletes and drops are allowed.

There are two reasons for this error.

  1. Filegroup container maxsize value has been reached
  2. The volume(s) the filegroup containers reside on are out of disk space

Remediation for reason 1

Option 1 – Delete some data

If the database is still available, data can be deleted; the application team must do this.

After the records have been deleted the DBA should issue the following commands to take the database out of delete-only mode. Swap the database name "inmemDB" for the name of the database with the problem.

CHECKPOINT

backup log inmemDB to disk = 'nul:'

Option 2 – Expand the filegroup or add another container.

To attempt a fix, filegroup and container information is required. If the database is in a suspect / unavailable state this information can be obtained as follows.

--if the database is suspect the filegroup name and the current container names and paths will be unavailable from within the database

--container names, paths and max_size are available via this query against master

select db_name(database_id) [DB Name], name,
physical_name, max_size, type_desc from master.sys.master_files where type_desc = 'FILESTREAM'

Note the max_size values returned; a max_size of -1 indicates unlimited growth. In testing the max_size figure is not an accurate value.

To obtain the filegroup name an example has been provided below

--the filegroup name can be identified by the following methods

--restore the filelist from an offline backup file

restore filelistonly from disk = 'c:\temp\inmemDB.dmp'

Alternatively, go to a replica if available and run this in the database:

select * from sys.filegroups

If the filegroup maxsize has been reached, set the maxsize to a larger value or to unlimited. The following example sets the container to unlimited:

ALTER DATABASE [inmemDB] MODIFY FILE
( NAME = N'InMemDB_inmem1', FILENAME = N'E:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem1' , MAXSIZE = UNLIMITED)

If the volume has run out of space another container can be created on a volume with space; note this volume must exist on the other replicas if the database is in an availability group. The following example adds a new container with unlimited growth:

ALTER DATABASE [inmemDB] ADD FILE
( NAME = N'InMemDB_inmem2', FILENAME = N'F:\MSSQL13.MSSQLSERVER\MSSQL\Data\inmemDB_inmem2' , MAXSIZE = UNLIMITED)
TO FILEGROUP [inmemDB_inmem] -- add container to filegroup

GO

The next step is based on the database mode.

If database is in suspect mode

After remediation, if the database is suspect and NOT in an availability group, set the database OFFLINE and ONLINE. Example below.

--if the database is in suspect mode attempt to bring it online with these commands

ALTER DATABASE inmemDB SET OFFLINE;

ALTER DATABASE inmemDB SET ONLINE;

If the databases are in an availability group perform an availability group failover.

If database is in delete-only mode

Issue the following commands to take the database out of delete-only mode. Swap the database name "inmemDB" for the name of the database with the problem.

CHECKPOINT

backup log inmemDB to disk = 'nul:'

Error 41805 – There is insufficient memory in the resource pool

Full message:

There is insufficient memory in the resource pool '%ls' to run this operation on memory-optimized tables. See 'http://go.microsoft.com/fwlink/?LinkID=614951' for more information.

The SQL errorlog may also contain messages similar to this

Disallowing page allocations for database 'inmemDB' due to insufficient memory in the resource pool 'inmemDB_IM_RP'. See 'http://go.microsoft.com/fwlink/?LinkId=510837' for more information

This error occurs when in memory databases bound to a resource pool use all the allocated RAM. It can also occur when the resource governor is disabled.

Remediation

Check that resource governor is enabled by running this query

select * from sys.resource_governor_configuration

A "0" indicates it is DISABLED; to enable it, issue this command:

ALTER RESOURCE GOVERNOR RECONFIGURE;

Identify the size of the pool containing the database; see the section Identify in memory databases, bound pools and space usage for details.

Modify the resource pool; see the section Modifying A Resource Pool for details.

After remediation, if the database is suspect and NOT in an availability group, set the database OFFLINE and ONLINE. Example below.

--if the database is in suspect mode attempt to bring it online with these commands

ALTER DATABASE inmemDB SET OFFLINE;

ALTER DATABASE inmemDB SET ONLINE;

If the databases are in an availability group perform an availability group failover.

Checkpoint Files Filling Up Disk Space And Log Not Shrinking

On-disk checkpoint files will grow as in memory objects in RAM grow; this is expected behavior. In certain rare circumstances checkpoint files will grow and the database transaction log will not truncate even when no additional in memory data is added. This is because transaction log backups must occur to allow the in memory checkpoint process to run.

Remediation

Take a full backup of the database, back up the transaction log, then manually issue a CHECKPOINT command.
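
A hedged example of the sequence is below; the backup paths are placeholders, and log_reuse_wait_desc can be checked before and after to confirm the log is able to truncate (XTP_CHECKPOINT indicates this condition).

-- Check why the log is not truncating
select name, log_reuse_wait_desc from sys.databases where name = 'inmemDB'
-- Remediation: full backup, log backup, then a manual checkpoint (paths are placeholders)
BACKUP DATABASE inmemDB TO DISK = 'E:\backups\inmemDB_full.bak'
BACKUP LOG inmemDB TO DISK = 'E:\backups\inmemDB_log.trn'
CHECKPOINT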

This condition is described fully in this link.

https://techcommunity.microsoft.com/t5/datacat/checkpoint-process-for-memory-optimized-tables-in-sql-2016-and/ba-p/305289

Re-Sync SQL Server Logins With Users

--First, make sure that this is the problem. This will list the orphaned users:

EXEC sp_change_users_login ‘Report’

--If you already have a login id and password for this user, fix it by doing:

EXEC sp_change_users_login ‘Auto_Fix’, ‘user’

--If you want to create a new login id and password for this user, fix it by doing:

EXEC sp_change_users_login ‘Auto_Fix’, ‘user’, ‘login’, ‘password’
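
Note that sp_change_users_login is deprecated on newer SQL Server versions; the supported equivalent for remapping a single orphaned user to an existing login is ALTER USER, for example:

-- Run in the user database: remap an orphaned database user to an existing server login
ALTER USER [user] WITH LOGIN = [login];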

To fix all orphaned users run:

USE <database_name> -- change to the database in which you want to fix orphaned users
GO

declare @name varchar(150)
DECLARE cur CURSOR FOR
select name from master..syslogins -- logins with no matching user in this database simply return an error and are skipped
Open cur
FETCH NEXT FROM cur into @name
WHILE @@FETCH_STATUS = 0
BEGIN

EXEC sp_change_users_login 'AUTO_FIX', @name

FETCH NEXT FROM cur into @name

END

CLOSE cur
DEALLOCATE cur


Sybase – Assign Read to all objects in database

extract script (extract.sql):
select 'grant select on ' + name + ' to READ-ROLE_RO' from sysobjects where type in ( 'U', 'V')
go

Command to create set_perm.sql script:
isql -Udbaxx_operxx -SSYBSERV11 -P* -DDB1 -iextract.sql -oset_perm.sql
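
For illustration, the generated set_perm.sql simply contains one grant statement per user table and view, along the lines of the below (the table names are hypothetical, and a terminating go may need to be appended before running it through isql):

grant select on customer_table to READ-ROLE_RO
grant select on orders_view to READ-ROLE_RO
go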

--Group creation script READ-ROLE_RO.sql which adds the group to relevant users
sp_addgroup READ-ROLE_RO
go
sp_changegroup READ-ROLE_RO, bichdanxx
go
sp_changegroup READ-ROLE_RO, rattparxx
go
sp_changegroup READ-ROLE_RO, ramasatxx
go
sp_changegroup READ-ROLE_RO, ramedilxx
go
sp_changegroup READ-ROLE_RO, singchadxx
go
sp_changegroup READ-ROLE_RO, rathharxx
go

isql -Udbaxx_operxx -SSYBSERV11 -P -iREAD-ROLE_RO.sql -DDB1

Command to execute set_perm script to set read permissions for all tables and views to that group:
isql -Udbaxx_operxx -SSYBSERV11 -P* -iset_perm.sql