Database status values in sysdatabases

Status control bits in the sysdatabases table


Decimal   Hex      Status

4         0x04     select into/bulkcopy; can be set by user
8         0x08     trunc log on chkpt; can be set by user
16        0x10     no chkpt on recovery; can be set by user
32        0x20     database created with for load option, or crashed while loading; instructs recovery not to proceed
256       0x100    database suspect; not recovered; cannot be opened or used; can be dropped only with dbcc dbrepair
512       0x200    ddl in tran; can be set by user
1024      0x400    read only; can be set by user
2048      0x800    dbo use only; can be set by user
4096      0x1000   single user; can be set by user
8192      0x2000   allow nulls by default; can be set by user



There is also an undocumented value, 320, which is very similar to 256 (database suspect) but still allows you to perform certain functions on the db.
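As an illustration of how these bits are used, you can test them directly with a bitwise and against the status column. A minimal sketch, assuming ASE 11.5 or later for the case syntax:

select name,
       status,
       suspect     = case when status & 256  != 0 then "Y" else "N" end,
       read_only   = case when status & 1024 != 0 then "Y" else "N" end,
       single_user = case when status & 4096 != 0 then "Y" else "N" end
from master..sysdatabases
order by name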

Using set showplan

This section explains how to use and interpret the showplan command to better understand and utilize the SQL Server query optimizer.

When you send a SQL statement to the Sybase SQL Server, the request first goes to a cost-based query optimizer whose job it is to find the most efficient data access path to fulfill the request. To do this, the optimizer examines such information as:

  • The structure of any indices defined on the table(s) involved

  • The distribution of data within the indices

  • The number of rows in the table(s) involved

  • The number of data pages used by the table(s) involved

  • The amount of data cache SQL Server is currently using

  • The access path that would require the least amount of I/O and, therefore, would be the fastest

Once an optimal access path is calculated, it is stored in a query plan within the procedure cache.

The showplan command allows you to view the plan the optimizer has chosen and to follow each step that the optimizer took when joining tables, reading from an index, or using one of several other methods to determine cost efficiency. To invoke the showplan command, enter:

1> set showplan on

2> go

This command causes SQL Server to display query plan information for every SQL statement executed within the scope of the SQL Server session.

Since the determination of a query plan is performed independently from the actual data retrieval or modification, it is possible to examine and tune query plans without actually executing the SQL statement. This can be accomplished by instructing SQL Server not to execute any SQL statements via the following command:

1> set noexec on

2> go



Note
Issue set noexec after set showplan; otherwise, the set showplan command itself will not execute.



For more information about executing the showplan command, refer to the SQL Server Performance and Tuning Guide.
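For example, a typical tuning session looks like this (note the order; set commands are still processed while noexec is on, so the final set noexec off works):

1> set showplan on
2> go
1> set noexec on
2> go
1> select au_lname from authors where city = "Oakland"
2> go
(the query plan is displayed, but the query itself is not executed)
1> set noexec off
2> go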



Note
The showplan command does not function within stored procedures or triggers. However, if you set it to on and then execute a stored procedure or a command that fires a trigger, you can see the procedure or trigger output.



Use the following examples to analyze a query plan. In all cases, examples use the pubs database provided with each SQL Server release.

Interpreting showplan Output

The output of the showplan command consists of many different types of statements, depending on the details of the access path that is being used. The following sections describe some of the more common statements.

STEP n

This statement is added to the showplan output for every query, where n is an integer, beginning with 1. For some queries, SQL Server cannot effectively retrieve the results in a single step, and must break the query plan into several steps. For example, if a query includes a group by clause, the query needs to be broken into at least two steps: one to select the qualifying rows from the table and another to group them.

The following query demonstrates a single-step query and its showplan output:

1> select au_lname, au_fname from authors

2> where city = "Oakland"

3> go

STEP 1

The type of query is SELECT

FROM TABLE

authors

Nested iteration

Table Scan

A multiple-step example is shown in the next section.

The Type of Query Is SELECT (into a Worktable)

This showplan statement indicates that SQL Server needs to insert some of the query results into an intermediate worktable and, later in the query processing, select the values from that table. This is most often seen with a query which involves a group by clause, as the results are first put into a worktable, and then the qualifying rows in the worktable are grouped based on the given column in the group by clause.

The following query returns a list of cities and indicates the number of authors who live in each city. The query plan is composed of two steps: the first step selects the rows into a worktable, and the second step retrieves the grouped rows from the worktable.

1> select city, total_authors = count (*)

2> from authors group by city

3> go

STEP 1
The type of query is SELECT (into a worktable)
GROUP BY
Vector Aggregate
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

The Type of Query Is query_type

This statement describes the type of query for each step. For most user queries, the value for query_type is select, insert, update, or delete. If showplan is turned on while other commands are issued, the query_type reflects the command that was issued. The following two examples show output for different queries or commands:

1> create table Mytab (col1 int)

2> go

STEP 1

The type of query is CREATE TABLE

1> insert publishers

2> values ("9904", "NewPubs", "Nome", "AL")

3> go

STEP 1

The type of query is INSERT

The update mode is direct

Table Scan

TO TABLE

publishers

The Update Mode Is Deferred

There are two methods, or modes, that SQL Server can use to perform update operations such as insert, delete, update, and select into. These methods are called deferred update and direct update. When the deferred method is used, the changes are applied to all rows of the table by making records in the transaction log to reflect the old and new values of the column(s) being modified (in the case of update operations), or the values that will be inserted or deleted (in the case of insert and delete).

When all log records have been constructed, the changes are applied to the data pages. This method generates more log records than a direct update, but it has the advantage of allowing commands to execute which may cascade changes throughout a table. For example, consider a table that has a column col1 with a unique index on it and data values numbered consecutively from 1 to 100 in that column. Execute an update statement to increase the value in each row by one:

1> update Mytable set col1 = col1 + 1

2> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

Mytable

Nested iteration

Table scan

TO TABLE

Mytable

Consider the consequences of starting at the first row in the table and updating each row in turn until the end of the table: this violates the unique index. Updating the first row (which has an initial value of 1) to 2 would cause an error, since 2 already exists in the table, and updating the second row, or any row except the last, fails for the same reason.

Deferred updates avoid such unique index violations: log records are first created to show the new values for each row, then the existing rows are deleted and the new rows are inserted. In the following example, the table authors has no clustered index or unique index:

1> insert authors select * from authors

2> go

STEP 1

The type of query is INSERT

The update mode is deferred

FROM TABLE

authors

Nested iteration

Table Scan

TO TABLE

authors

Because the table does not have a clustered index, new rows are added at the end of the table. The query processor distinguishes the rows that existed in the table before the insert command from the rows to be inserted, thus avoiding an endless loop of selecting a row, inserting it at the end of the table, re-selecting the row just inserted, and reinserting it. The deferred insertion method first creates the log records to show all currently existing values in the table. Then SQL Server rereads those log records to insert the rows into the table.

The Update Mode Is Direct

Whenever possible, SQL Server tries to directly apply updates to tables, since this is faster and creates fewer log records than the deferred method. Depending on the type of command, one or more criteria must be met in order for SQL Server to perform the update using the direct method. The criteria are as follows:

  • insert - For the direct method to be used, the table into which the rows are being inserted cannot be a table which is being read from in the same command. The second query example in the previous section demonstrates this: the rows are inserted into the same table from which they are being selected. In addition, if rows are being inserted into the target table, and one or more of the target table's columns appear in the where clause of the query, then the deferred method, rather than the direct method, will be used.

  • select into - When a table is being populated with data by means of a select into command, the direct method will always be used to insert the new rows.

  • delete - For the direct update method to be used for delete, the query optimizer must be able to determine that either zero or one row qualifies for the delete. The only way to verify this is to check that a unique index exists on the table, that it is qualified in the where clause of the delete command, and that the target table is not joined with any other table(s).

  • update - For the direct update method to be used for update commands, the same criteria apply as for delete: a unique index must exist so that the query optimizer can determine that no more than one row qualifies for the update, and the only table in the update command is the target table to update. Also, all updated columns must be fixed-length datatypes, not variable-length datatypes. Note that any column that allows null values is internally stored by SQL Server as a variable-length datatype column.

1> delete from authors

2> where au_id = "172-32-1176"

3> go

STEP 1

The type of query is DELETE

The update mode is direct

FROM TABLE

authors

Nested iteration

Using Clustered Index

TO TABLE

authors

1> update titles set type = 'popular_comp'

2> where title_id = "BU2075"

3> go

STEP 1

The type of query is UPDATE

The update mode is direct

FROM TABLE

titles

Nested iteration

Using Clustered Index

TO TABLE

titles

1> update titles set price = $5.99

2> where title_id = "BU2075"

3> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

titles

Nested iteration

Using Clustered Index

TO TABLE

titles

Note that the only difference between the second and third example queries is the column being updated. In the second query the direct update method is used, whereas in the third the deferred method is used. The difference occurs because of the datatype of the column being updated: titles.type is defined as char(12) NOT NULL, whereas titles.price is defined as money NULL. Since a column that allows nulls is stored internally as variable-length, titles.price is not a fixed-length datatype and the direct method cannot be used.
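If you are unsure which columns rule out the direct method, sp_help reports each column's datatype and nullability; remember that a column that allows nulls counts as variable-length. For example:

1> sp_help titles
2> go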

GROUP BY

This statement appears in the showplan output for any query that contains a group by clause. Queries that contain a group by clause are always two-step queries: the first step selects the qualifying rows into a worktable and groups them; the second step returns the rows from the worktable, as seen in the following example:

1> select type, avg (advance),

sum(ytd_sales)

2> from titles group by type

3> go

STEP 1

The type of query is SELECT (into a worktable)

GROUP BY

Vector Aggregate

FROM TABLE

titles

Nested iteration

Table Scan

TO TABLE

Worktable

STEP 2

The type of query is SELECT

FROM TABLE

Worktable

Nested iteration

Table Scan

Scalar Aggregate

Transact-SQL includes the aggregate functions avg, count, max, min, and sum. Whenever you use an aggregate function in a select statement that does not include a group by clause, the result is a single value, regardless of whether it operates on all table rows or on a subset of the rows defined in the where clause. When an aggregate function produces a single value, the function is called a scalar aggregate and showplan lists it that way as seen in the
following example:

1> select avg(advance), sum(ytd_sales) from

titles

2> where type = "business"

3> go

STEP 1

The type of query is SELECT

Scalar aggregate

FROM TABLE

titles

Nested iteration

Table scan

STEP 2

The type of query is SELECT

Table Scan

showplan considers this a two-step query, which is similar to the group by output. Since the query contains a scalar aggregate which will return a single value, SQL Server keeps a “variable” internally to store the result of the aggregate function. It can be thought of as a temporary storage space to keep a running total of the aggregate function as the qualifying rows from the table are evaluated. After all rows are evaluated from the table in step 1, the final value of the variable is selected in step 2 to return the scalar aggregate result.

Vector Aggregates

When a group by clause is used in a query that also includes an aggregate function, the aggregate function produces a value for each group. These values are called vector aggregates. The vector aggregate statement from showplan indicates that the query includes a vector aggregate. The following example query includes a vector aggregate:

1> select title_id, avg (qty) from sales

2> group by title_id

3> go

STEP 1

The type of query is SELECT (into a worktable)

GROUP BY

Vector Aggregate

FROM TABLE

sales

Nested iteration

Table Scan

TO TABLE

Worktable

STEP 2

The type of query is SELECT

FROM TABLE

worktable

Nested iteration

Table Scan

from table Statement

This showplan output shows the table from which the query reads. In most queries, from table is followed by the table's name; in other cases, it may show that the query is selecting from a worktable. The significant fact is that the from table output shows the order in which the query optimizer joins the tables. The order in which the tables are listed is the order in which they are joined, and it often differs from the order in which the tables are listed in the query's from or where clauses. This is because the query optimizer checks many join orders for the tables and picks the order that uses the fewest I/Os.

1> select authors.au_id, au_fname, au_lname

2> from authors, titleauthor, titles

3> where authors.au_id = titleauthor.au_id

4> and titleauthor.title_id =

titles.title_id

5> and titles.type = "psychology"

6> go

STEP 1

The type of query is SELECT

FROM TABLE

TITLES

Nested iteration

Table Scan

FROM TABLE

TITLEAUTHOR

Nested iteration

Table Scan

FROM TABLE

authors

Nested iteration

Table Scan

This query illustrates the join order that the query optimizer chose for the tables, which is not the order listed in either the from or where clauses. By examining the order of the from table statements, it can be seen that the qualifying rows from the titles table are first located with the search clause titles.type = “psychology”. Those rows are then joined with the titleauthor table using the join clause titleauthor.title_id = titles.title_id. Finally, the
titleauthor table is joined with the authors table to retrieve the desired columns using the join clause
authors.au_id = titleauthor.au_id.

to table Statement

When you issue a command that tries to modify one or more table rows, such as insert, delete, update, or select into, the to table statement shows the target table that is being modified. If the operation requires an intermediate step and inserts the rows into a worktable, the to table statement names the worktable instead of the user table.

1> insert sales

2> values ("8042", "QA973", "7/15/94", 7,

3> "Net 30", "PC1035")

4> go

STEP 1

The type of query is INSERT

The update mode is direct

TO TABLE

sales

1> update publishers

2> set city = "Los Angeles"

3> where pub_id = "1389"

4> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

publishers

Nested iteration

Using Clustered Index

TO TABLE

publishers

Note that the showplan for the second query indicates that the publishers table is used for both from table and to table. With update operations, the query optimizer must first read the table containing the row(s) to be updated, resulting in the from table statement, and then must modify the row(s), resulting in the to table statement.

Worktable

For some queries, such as those that require ordered or grouped output, the query optimizer creates its own temporary table called a worktable. The worktable holds all the intermediate results of the query where they are ordered and/or grouped, and then the final select is done. When all results are returned, the table is dropped automatically. The tempdb database holds all temporary tables so the System Administrator may need to increase the size of that database to accommodate very large worktables. For more information about worktables, refer to Chapter 8, “The tempdb Database.”

Since the query optimizer creates these worktables for its own internal use, the worktable names are not listed in the tempdb..sysobjects table.
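To see how much room worktables have, check the current size of tempdb; a minimal sketch, where tempdb_dev is a placeholder for a device with free space should you need to enlarge it (the 50 is megabytes):

1> sp_helpdb tempdb
2> go
1> alter database tempdb on tempdb_dev = 50
2> go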

Nested Iteration

The nested iteration is the default technique used to join tables and return rows from a table. It indicates that the query optimizer uses one or more sets of loops to read a table and fetch a row, qualify the row based on the search criteria given in the where clause, return the row to the front end, and then loop again for the next row. The following example shows the query optimizer doing nested iterations through each of the tables in the join:

1> select title_id, title

2> from titles, publishers

3> where titles.pub_id = publishers.pub_id

4> and publishers.pub_id = "1389"

5> go

STEP 1

The type of query is SELECT

FROM TABLE

publishers

Nested iteration

Using clustered index

FROM TABLE

titles

Nested iteration

Table Scan

Table Scan

This showplan statement identifies the method used to fetch the physical result rows from the given table. When the table scan method is used, execution begins with the first row in the table; each row is fetched, compared with the conditions set in the where clause, and returned as valid data if the conditions are met. No matter how many rows qualify, every row in the table must be checked, which causes problems if the table is large, since such a scan has a high I/O overhead. Even if a table has one or more indexes on it, the query optimizer may still choose a table scan instead of reading an index. The following query shows a typical table scan:

1> select au_lname, au_fname

2> from authors

3> go

STEP 1

The type of query is SELECT

FROM TABLE

authors

Nested iteration

Table Scan

Structure of databases as regards loading

INTRODUCTION


The layout of Sybase SQL Server and ASE databases cannot be easily altered. A database, once created and altered, carries the logical layout of its data and log segments for the duration of its life. A major reorganization of data and log segments is possible only by completely rebuilding the database using bcp or a similar method. A limited reorganization is possible via the sp_logdevice stored procedure.

A reorganization of a database's logical layout is not achievable via dump and load. A database created for load must be laid out in exactly the same way as the database that was dumped, in order to retain its layout of data and log segments. In the simplest scenario, a database created for load must be created and altered in exactly the same way, in the same order, and with the same size parameters (though not necessarily on the same database devices) as the database that was dumped.


This note describes how to create a database and load a dump of it when the logical layout may be unknown. (It may also be used to create and load a database when the logical layout is known.)

The issues of segments are described in much more detail in a white paper entitled "Segment Remapping with load database When Moving a Database", doc ID 1324. The document is available on the Sybase Web site, www.sybase.com.



BACKGROUND INFORMATION


The internal layout of a database's data and log pages is stored in the sysusages system table in the master database. If you run the following query:

select * from sysusages order by lstart

you'll see output similar to the following:


dbid  segmap  lstart   size      vstart  pad   unreservedpgs
----  ------  ------  -----  ----------  ----  -------------
   1       7       0   1536           4  NULL            208
   1       7    1536   1024        3588  NULL           1016
   1       7    2560  10240        6660  NULL          10240
   2       7       0   1024        2564  NULL            632
   2       7    1024  14336    83886082  NULL          14336
   3       7       0   1024        1540  NULL            632
   4       3       0  46080   100663298  NULL          40936
   4       4   46080  15360   117440514  NULL          15352
   4       3   61440   1024   100709378  NULL            512
   4       3   62464   2048   100710402  NULL           1024



Each row in sysusages is referred to as a fragment and represents a contiguous chunk of database space on a single database device. A segmap of 7 indicates that data and log are mixed; 3 means data only and 4 means log only. Any higher values indicate user-defined segments, and any values lower than 3 indicate that segments have been dropped using sp_dropsegment.

size is expressed in 2K pages; divide it by 512 to obtain megabytes. lstart indicates the logical start of the fragment. vstart points to the location on disk; the vstart value falls between the low and high column values in the sysdevices table, so a join between these two tables will show the device names and paths.
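A minimal sketch of that join, assuming the default 2K page size for the megabyte arithmetic:

select d.name, u.dbid, u.lstart, u.segmap, mb = u.size / 512
from master..sysusages u, master..sysdevices d
where u.vstart between d.low and d.high
and d.cntrltype = 0
order by u.dbid, u.lstart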



Each row in the sysusages table normally represents one size specification in a create database or alter database statement. There is an exception: when the server is unable to locate contiguous disk space to match a size specification, it breaks the size into multiple smaller values and adds multiple fragments to the sysusages table. These fragments are still located on the same database device.

If any of the sizes end in a half megabyte, a special situation exists. When the server creates a database device and the size requested is a page larger than the space available on the disk partition, disk init does not fail; instead, a half-megabyte chunk is allocated. If a database is subsequently created or altered to completely fill this database device, it will end up with a fragment that ends in a half megabyte. This situation, a quirk in the software, should be avoided, because, as stated above, a database cannot easily be reorganized. Make sure that the size specified with disk init is actually available. If by chance a database device has been created with a half-megabyte size, do not fill the device to the last page. On IBM AIX, in order to accommodate the Logical Volume Control Block, specify the size with the disk init command as one megabyte less than the size of the partition.




DUMP AND LOAD



Let's suppose that we wish to dump the database with dbid 4 and load it into another server. Or suppose we have a dump of this database, no information about its layout (the source server is unavailable), and want to load it into another server. How do we proceed?

The key to success is creating the target database with the same logical layout as the source database contained in the dump. (This is why Sybase recommends keeping (1) database create and alter scripts, and (2) bcp copies and hard copies of key system tables such as sysdatabases, sysdevices, sysusages and syslogins.) Keep in mind that if you have a database with half-megabyte chunks in the sysusages table, you will have to create the database devices on the target server exactly as they were created on the source server. Otherwise, as long as the database devices are large enough to accommodate the sizes of the database fragments, their sizes do not come into play.


Let's examine the layout of the database with dbid = 4 in the above example. The database consists of four fragments: 90 MB of data and 30 MB of log, followed by 2 MB and 4 MB fragments of data. This is the permanent logical organization of the database: logical page 46079 will always be a data page, and it will always be followed by the first page of the log, logical page 46080. As we stated, it is not possible to alter this layout. Only additions to the bottom (with appropriate alter database commands) are possible.



SCENARIO 1: The source sysusages table is available.

If the create and alter scripts are available, edit them to update the database device information (presumably the device names are different on the target server, though they can be the same) and run them. Then load the database.

If the create and alter scripts are unavailable, examine the source sysusages table. We are interested in two columns: segmap and size. Create and alter the database in the same order as the fragments appear in the above query (ordered by lstart), specifying the same size parameter for each fragment. Treat any rows with segmap values other than 3, 4 or 7 as data; these values, stored in the dump, will be overwritten anyway. Note that any adjacent rows with identical segmap values may be combined into fewer rows in the create and alter statements. Note also that the fragments are device independent: they may be created on any database devices which have the available space. In this way, a limited reorganization of a database is possible after all. Once the database has been created, compare the sysusages tables of the source and target databases before loading the dump. Except where fragments have been combined, the size columns of the rows and the order in which they appear should match exactly.
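For the dbid = 4 layout above (90 MB of data, 30 MB of log, then 2 MB and 4 MB of data), the sequence would look like the following sketch; the database name mydb, the devices data_dev and log_dev, and the dump path are placeholders for whatever the target server provides:

create database mydb on data_dev = 90 log on log_dev = 30 for load
go
alter database mydb on data_dev = 2 for load
go
alter database mydb on data_dev = 4 for load
go
load database mydb from '/dumps/mydb.dmp'
go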



SCENARIO 2: The source sysusages table is unavailable.

When the layout of a database contained in a dump is unknown, you may need to load the dump twice.

First, create the database as well as you can, estimating the sizes of the segments, and create it with as few fragments as possible. Load the database. Next, examine the sysusages table:

select segmap, size from sysusages where dbid = n order by lstart

The output does not necessarily show how the source database was organized, because some source fragments may have had to be split up to fit the database as it was created on the target server. These fragments will show up as adjacent rows with identical segmap values.

If the placement of database segments on the database devices is unsatisfactory, drop the just-loaded database and proceed with the steps below. Create and alter the database in the same order as the fragments appear in the above query (ordered by lstart), specifying the same size parameter for each fragment. Treat any rows with segmap values other than 3, 4 or 7 as data; these values, stored in the dump, will be overwritten anyway. As in Scenario 1, adjacent rows with identical segmap values may be combined into fewer rows, and the fragments may be created on any database devices which have the available space.

Finally, load the database again.



END NOTES


To assure the success of a load, it is best to create the database for load with the same size as the source database. (The target database may not be smaller than the source database.) If you need to enlarge a database, alter it after the load has completed.

What, then, is possible? Since in the dump and load scenario database fragments are independent of the devices on which they reside, they can be moved among devices, dispersed among a larger number of devices, or consolidated on fewer devices. Remember: the value and order of the logical start of fragments, the lstart column, is critical, but the vstart column, indicating the actual physical location of a fragment, is entirely irrelevant to this undertaking.




A guide to Replication server in plain English

Introduction

This guide will hopefully cut to the chase and show how you can set up a simple replication system consisting of one dataserver, two databases (one replicating, one receiving) and two replication servers, one for each database.



Installing Replication server

This is quite straightforward:

1) After unloading the files from the CD, set up the $SYBASE variable and also $PATH to be $SYBASE/bin:$PATH.

2) Next run dsedit and add the names and network addresses of the replication server you are building, the ID server (the original Repserver) if this is your second or later Repserver, and also any dataservers you will be connecting to.

3) The menus are relatively self-explanatory; make sure that if this is the first Replication Server in the system you define it as an ID server. Also note that the RSSD devices are on the same machine as the dataserver used for the RSSDs; ideally each Repserver would have its own dataserver, but in this example they both run off the one dataserver. The disk partition goes in the same place as the Repserver files.

4) After having set all the options, with a "complete" next to each menu item, you can build the Repserver; if there are any problems you will be notified.



Setting up the first Replication server

This replication server is called phoebe_rep.

On this replication server we need to create a connection to the primary database, which is called sundry and sits on dataserver phoebe. This is done using rs_init and selecting the add database option. The menus are self-explanatory; the only thing to look out for is the option deciding whether the database is replicated or not. For this connection select yes, as it is the primary database. This should create OK and set up the DSI links etc. and also the rep agent.

The connection to a database can be stopped by issuing the following command from the Repserver which controls it: suspend connection to phoebe.sundry. It is restarted with resume connection to phoebe.sundry.



Setting up the second Replication server

The second replication server is called phoebe_2nd_rep.

When creating this second Repserver, be sure the first Replication Server is up and running and point to it as the ID server. Once it is installed, add the replicate database to it using rs_init; this time select no for the question "is this database replicated". The replicate database in this example is called sundry3.



Setting up Route

We need to create a route from each primary replication server to each replication server that subscribes to data from it (in this case from phoebe_rep to phoebe_2nd_rep). To do this you will need to do the following.

Alter the connection from the RSSD database for the primary Repserver as follows (from the primary Repserver): suspend connection to phoebe_rep_RSSD, then alter connection to phoebe_rep_RSSD set log transfer to on, followed by resume connection to phoebe_rep_RSSD. The reason for this is so that the replicate Repserver can read any changes to the primary Repserver in terms of new definitions created etc., and also pick up all the ones which already exist.

Next you have to enable a rep agent for this database, so log in to the phoebe server and issue the following command: sp_config_rep_agent phoebe_rep_RSSD, enable, phoebe_rep, sa, NULL, followed by sp_start_rep_agent phoebe_rep_RSSD.

The command to actually create the route can now be issued from the primary Repserver (phoebe_rep):

create route to phoebe_2nd_rep

set username sa

set password NULL

The progress of this route creation can be checked with rs_helproute, run from the RSSD for the primary Repserver, i.e. from phoebe_rep_RSSD. If everything is OK this will report the route as active.



Setting up Replication

The next step is to select the table you want to replicate in the primary database (sundry), which is managed by the primary Repserver (phoebe_rep). In this example the table is called alpha1 and contains only two columns, one an int and the other a char(10). To enable this table for replication, issue the following command from database sundry:

sp_setreptable alpha1, true


Incidentally, the commands to start and stop the rep agent for sundry (which monitors the transactions on the alpha1 table through syslogs) are sp_start_rep_agent sundry and sp_stop_rep_agent sundry. Also, to replicate a stored procedure (which can be a much more efficient way of doing things), the command is sp_setrepproc proc1, true.
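Incidentally, running sp_setreptable with just the table name reports whether the table is currently marked for replication, which is a handy sanity check:

1> use sundry
2> go
1> sp_setreptable alpha1
2> go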


You also need to create a table with the same column structure in the replicate database (sundry3), which is managed by the replicate Repserver (phoebe_2nd_rep), i.e.

create table alpha1 (a int, b char(10))

This is also true for replicating stored procs, where an identical stored proc must already exist in the replicate database.


Now we can check whether all the connections, routes and rep agents are up by issuing admin who_is_down at each of the replication servers. If all is clear then we are ready to create the replication definition.



The command to create the replication definition, issued from the primary Repserver, is:


create replication definition repdef1

with primary at phoebe.sundry

with all tables named 'alpha1'

(a int, b char(10))

primary key (a)


You always have to specify a primary key!!



To check that the routing is working, you can verify that the replication definition has been copied over to the replicate Replication Server (phoebe_2nd_rep) by issuing the following command in phoebe_2nd_rep_RSSD on dataserver phoebe:

rs_helprep

Incidentally, the stored proc to check on subscriptions is rs_helpsub.

Now we need to set up a subscription to this replication definition in the replicate Repserver as follows (command issued from phoebe_2nd_rep):


create subscription sub1

for repdef1

with replicate at phoebe.sundry3


The server will now start materializing this table from the primary database to the replicate database. You can see the progress of this by issuing the following command from the replicate Repserver:



check subscription sub1

for repdef1

with replicate at phoebe.sundry3


If this comes back as valid then everything is fine.

You can also check on subscriptions by issuing rs_helpsub from the replicate RSSD database.



Verifying

The replication process is now set up. To check it, insert a few rows into the sundry database's alpha1 table and see if they are replicated to the sundry3 alpha1 table.

DR for a Replication Server

Restoring a Primary Database

________________________________________________________________________

Phase I: Clean up the old system


1) Log on to your primary Replication Server and suspend the DSI connection to the primary database.

suspend connection to srv1.pdb

2) Re-create the primary database.

  • If your primary database is irretrievably damaged, drop it. If the database is marked suspect, you must use the dbcc dbrepair command; drop database will not execute on a suspect database.

drop database pdb
dbcc dbrepair (pdb, dropdb)

  • Drop any damaged database devices using sp_dropdevice.
  • Create new database devices using disk init (if necessary).
  • Create the new database using create database with the for load option.

3) Restore the primary database.

  • Load the database from the database dump.

load database pdb from <dump file>

  • Restore any transaction log dumps.

load transaction pdb from <dump file>

  • Activate the database using the online database command.

online database pdb

Phase II: Re-Establish Replication

If you have done up-to-the-minute recovery, your primary database should be current, and you can go directly to Step 8. Steps 4-7 are only necessary if you have not recovered all of the data in the database.

4) Zero out the LTM truncation point. Use the RSSD for your primary Replication Server, and run the following command:

rs_zeroltm srv1, pdb

5) Log on to the primary Replication Server and get the generation number for the primary database. Record this number.

admin get_generation, srv1, pdb

6) Use your primary database, and update the generation number. The new number should be higher than the old number.

dbcc settrunc('ltm', 'gen_id', <new number>)

The reason for doing this is that if you are restoring a primary database to an earlier state, the generation number in the RSSD of the Repserver controlling this db will differ from the one in the actual db; you must increment the database generation number so that the Replication Server does not ignore log records submitted after the database is reloaded.
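As a sketch, if admin get_generation reported 0 for srv1.pdb, you would bump it like this:

1> use pdb
2> go
1> dbcc settrunc('ltm', 'gen_id', 1)
2> go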


7) Recover any lost or inconsistent data.

  • Recover any data modifications lost from the primary database.
  • Re-synchronize the replicate database by one of the following means: re-materialize the replicate database, run rs_subcmp, or replay the transaction log from a log backup.

8) Once the primary and replicate databases are correct, you can resume replication. Log on to your primary Replication Server and resume the DSI connection to your primary database.

resume connection to srv1.pdb

9) Restart the RepAgent on your primary database.

10) Review your error logs to ensure that no errors have occurred.

Re-playing the Transaction Log

________________________________________________________________________


1) Log on to your Adaptive Server. Create a new database that has the same size and segments as your primary database. This will be your temporary recovery database.

2) Shut down the Replication Server and restart it in single-user mode. To do this, you should have a run server file that contains the -M option. Once the Replication Server comes up, check the error log to ensure that the server came up in stand-alone mode.

3) Put the Replication Server into log recovery mode by executing the following commands:

set log recovery for srv1.pdb
allow connections

4) Configure a RepAgent for the temporary recovery database that you created in Step 1. Log into your primary data server, use the temporary recovery database, and execute the following command:

exec sp_config_rep_agent temp_rep, 'enable', 'rep1', 'sa', ''


5) Log on to your primary Adaptive Server and load the temporary database you created with the database dump of your primary database.

load database temp_rep from '<dump file>'
go
online database temp_rep

6) Start the RepAgent for the temporary recovery database in recovery mode. Although you are starting the RepAgent on the temporary recovery database, the 'connect database' is your original primary database:

exec sp_start_rep_agent temp_rep, recovery, 'srv1', 'pdb', 'rep1'

7) Verify that the RepAgent has replayed everything that you have loaded. There are two ways to do this:

  • Examine the error log of your primary Adaptive Server, looking for a message that states that recovery of the transaction log is complete.
  • Run sp_help_rep_agent on the RepAgent for the temporary recovery database:

exec sp_help_rep_agent temp_rep, 'recovery'

The recovery status should be "not running" or "end of log".


8) Log on to your primary Adaptive Server and load the temporary database you created with the first transaction dump of your primary database:

load transaction temp_rep from '<dump file>'

9) Restoring the transaction dump will invalidate the LTM truncation point. Reset it by using the temporary recovery database and running the following command:

dbcc settrunc('ltm', 'valid')

10) Start the RepAgent for the temporary recovery database in recovery mode:

exec sp_start_rep_agent temp_rep, recovery, 'srv1', 'pdb', 'rep1'

11) Verify that the RepAgent has replayed everything that you have loaded. There are two ways to do this:

  • Examine the error log of your primary Adaptive Server, looking for a message that states that recovery of the transaction log is complete.
  • Run sp_help_rep_agent on the RepAgent for the temporary recovery database:

exec sp_help_rep_agent temp_rep, 'recovery'

The recovery status should be "not running" or "end of log".

12) Repeat Steps 8, 9, 10, and 11 for each transaction log dump until you have played all transaction log dumps through the replication system.


13) Check the Replication Server error log for any loss detection messages. If you find any, then this process has failed, and you must find another way of re-synchronizing your data in the primary and replicate databases. Your other options are:

  • Re-materializing the replicate database.
  • Running rs_subcmp.

14) If you did not find loss detection messages in the previous step, shut down your Replication Server, and restart it in normal mode. To do this, you should have a run server file that does not contain the -M option. If you accidentally bring the server up in single-user mode, there will be a message to this effect in the error log. Shut Replication Server down, and start it up using a correct run server file.

15) Re-establish the LTM truncation point in the original primary database. Use the primary database, and run the following command:

dbcc settrunc('ltm', 'valid')

16) Restart the RepAgent for the original primary database in normal mode.

exec sp_start_rep_agent pdb

17) Log on to your Replication Server and check for any open transactions:

admin who, sqt

18) Drop the temporary recovery database, and remove its database devices.




Rebuilding Stable Queues

________________________________________________________________________


1) Log on to your Replication Server, and drop the damaged partition.

drop partition <partition name>

2) If necessary, add a new partition to replace the disk space. Remember, the file or raw partition must already exist before you execute this command; see the sketch after this list.

  • In UNIX, you can use the touch command to create a file.
  • In Windows NT, you can create a file using a utility program such as Notepad.

add partition <partition name>
on '<physical_name>'
with size <size>
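As a concrete sketch, a hypothetical 20 MB replacement partition backed by a UNIX file (the partition name and path are placeholders):

add partition p2 on '/sybase/partitions/p2.dat' with size 20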


3) Check the disk space on your Replication Server. Your failed partition should show the status DROPPED, while your new partition should be listed as ONLINE.

admin disk_space

4) Rebuild the stable device:

rebuild queues

5) If the connection to your replicate database is suspended, resume it.

6) Examine your Replication Server error log. Wait for the log to display the message "Rebuild Queues: Complete". You may see the error message "DSI: detecting loss for database 'srv1.pdb'...". If your RepAgent is not running, this message can be ignored.

7) If the RepAgent for the primary database is not running, start it up. Use your primary database and run the following:

exec sp_start_rep_agent pdb

8) Examine your Replication Server error log. Check for loss detection messages. If a data loss was detected, then this process has not fully succeeded, and you have lost transactions.

  • Compare the primary and replicate databases, and replace any lost data.
  • Use the ignore loss command to restart replication.

ignore loss from srv1.pdb to srv1.rdb

If no data loss was detected, Replication Server will end loss detection mode, and normal replication may continue.

9) Check the disk space used by your Replication Server's stable device.

admin disk_space

If the damaged partition no longer appears in this display, the file or raw partition that was used by the damaged partition may be dropped.


Restoring an RSSD Database

________________________________________________________________________

Phase I: Clean up the old system


1) Shut down all RepAgents that connect to the current Replication Server.

2) If the Replication Server is still running, log on to it and shut it down.

3) Re-create the RSSD database.

  • If your RSSD database is irretrievably damaged, drop it. If the database is marked suspect, you must use the dbcc dbrepair command; drop database will not execute on a suspect database.

drop database rep1_RSSD
dbcc dbrepair (rep1_RSSD, dropdb)

  • Drop any damaged database devices using sp_dropdevice.
  • Create new database devices using disk init (if necessary).
  • Create the new RSSD database using create database with the for load option.

4) Restore the RSSD database.

  • Load the database from the database dump.

load database rep1_RSSD from '<dump device>'

  • Load any transaction log dumps. Be sure to apply the transaction log dumps in order.

load tran rep1_RSSD from '<dump device>'

  • Activate the database using the online database command.

online database rep1_RSSD

5) If your Replication Server's RSSD had a RepAgent and you were able to recover the database up-to-the-minute, re-establish its truncation point and restart your RepAgents. Use the RSSD database and run the following command:

dbcc settrunc('ltm', 'valid')

6) If your Replication Server's RSSD had a RepAgent and there was an active route from this Replication Server to another one, but you were unable to recover up-to-the-minute, get the generation number.

admin get_generation, srv1, rep1_RSSD

Phase II: Rebuild the RSSD

If you have done up-to-the-minute recovery, your RSSD database should be current. Restart your Replication Server and RepAgents. Steps 7-12 are only necessary if you have not been able to recover the RSSD database to its current state.

7) Shut down the Replication Server and restart it in single-user mode. To do this, you should have a run server file that contains the -M option. Once the Replication Server comes up, check the error log to ensure that the server came up in stand-alone mode.

8) Rebuild the stable device:

rebuild queues

9) Start up the RepAgents for your primary databases in recovery mode. To do this, run the following:

sp_start_rep_agent pdb1, recovery

10) Examine your Replication Server error log, and check for loss detection messages. If a data loss was detected:

  • Compare the primary and replicate databases, and replace any lost data.
  • Use the ignore loss command to restart replication.

ignore loss from srv1.pdb to srv1.rdb

If no data loss was detected, Replication Server will end loss detection mode, and normal replication may continue.

11) If this Replication Server is a primary Replication Server, with a route to a downstream Replication Server, then you must clean up the truncation point in the RSSD. Remember that a primary Replication Server's RSSD is a primary database, replicating schema information to other RSSDs. If your Replication Server is the only one in the domain, or if it is a replicate Replication Server, you do not need to run this last step. To clean up the truncation point, do the following:

  • Clear the RSSD's LTM truncation point:

dbcc settrunc('ltm', 'ignore')

  • Move the transaction log forward. Execute the following batch multiple times (40 times, for example):

begin tran
checkpoint
commit

If the number of log records does not increase, create a table in the database and drop it, and then try the above batch again.

  • Reset the RSSD's LTM truncation point:

dbcc settrunc('ltm', 'valid')

  • Increment the generation id number in the RSSD:

dbcc settrunc('ltm', 'gen_id', <new number>)

12) Shut down your Replication Server and your RepAgents, and restart them in normal mode.

  • To start your Replication Server in normal mode, you should have a run server file that does not contain the -M option. If you accidentally bring the server up in single-user mode, there will be a message to this effect in the error log. Shut Replication Server down, and start it up using a correct run server file.
  • To start a RepAgent in normal mode, run the following command:

sp_start_rep_agent pdb

Avoiding Disaster

________________________________________________________________________

To avoid disasters on your RSSD database:

1) Put your RSSD on database devices that are not being used by any other databases. If you can, put them on separate disk drives.

2) Separate your RSSD data and log portions onto separate database devices and, if possible, separate disk drives.

3) Mirror your RSSD database devices.

4) Keep current backups of your RSSD database.

5) Perform transaction backups of your RSSD database to permit up-to-the-minute recovery.

To avoid disasters on your stable device:

6) Mirror your stable device.

Note: Sybase does not provide a mechanism to mirror your stable device. Your disk drives or operating system must do this.

7) Always have more disk space in your stable device than is strictly necessary. If replication fails, having more disk space gives you time to fix the problem before the stable device fills up.

To avoid disasters on your application databases:

8) Keep current backups of your primary database.

9) If your replicate database is significantly different from your primary database, and would be difficult to rebuild, keep a current backup of it as well.

Miscellaneous suggestions:

10) Document your system, so that you know the names and locations of all important objects.

11) Create rs_subcmp parameter files for every subscription you are replicating, so that you don't need to create them during a crisis.

12) On a regular basis, run rs_subcmp against your primary and replicate tables to ensure that they truly are synchronized.

How much Procedure cache

To calculate how much procedure cache you need, there is a set of calculations which can be performed:

The upper size = (max number of concurrent users) * (size of the largest plan) * 1.25

The lower size = (total number of procedures) * (average size of query plan) * 1.25



To work out the largest query plan size (in 2K pages), execute the following query in the main user database:

select max(count(*)/8 +1)

from sysprocedures

group by id


And to find the average size, use this:

select avg(count(*)/8 + 1)

from sysprocedures

group by id



To calculate the total number of stored procedures, execute the following query:

select count(*)

from sysobjects

where type = "P"



The max number of concurrent users is found from the result of sp_configure "number of user connections".


Say, for example, the results came out to be 21MB (upper) and 3MB (lower): a reasonable figure could be 6MB, though obviously if you can give the procedure cache the full 21MB then that is ideal. This can be achieved by increasing total memory so that 20% of total cache is 6MB, or by altering the percentage of total cache given over to procedure cache.
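A worked sketch of the arithmetic with made-up figures (250 user connections, a 40-page largest plan, 500 procedures averaging 4 pages; the page counts are 2K pages, so dividing by 512 converts to megabytes):

declare @users int, @largest int, @procs int, @avgplan int
select @users = 250, @largest = 40, @procs = 500, @avgplan = 4
select upper_mb = (@users * @largest * 1.25) / 512.0,
       lower_mb = (@procs * @avgplan * 1.25) / 512.0
go

This would report roughly 24.4 MB for the upper bound and 4.9 MB for the lower.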

Common Repserver commands

A quick guide to some of the more common commands you'll need.



Create connection


create connection to phoebe.sundry

set error class rs_sqlserver_error_class

set function string class rs_function_string_class

set username sundry_maint

set password sundry_maint_ps

with log transfer on


Specifying log transfer on means that the connection will be able to replicate data as well as receive it.


Also remember that you have to add the database to the replication server by using rs_init in the install directory. This installs the stored procs and tables needed by the Repserver into the db.


Create replication definition


create replication definition rep1

with primary at phoebe.sundry

with all tables named 'alpha1' (a int, b char(10))

primary key (a)



Create subscription


create subscription sub1

for rep1

with replicate at phoebe.sundry2



check subscription

check subscription sub1

for rep1

with replicate at phoebe.sundry2


You can also use the command rs_helpsub from the RSSD of the repserver.

resume connection


This restarts the DSI threads etc.


resume connection to phoebe.sundry2

drop subscription

drop subscription sub1

for rep1

with replicate at phoebe.sundry2

This sometimes doesn't work; if that is the case, go into the RSSD database for the replication server and delete the relevant line from the rs_subscriptions table. You have to do this if you want to drop the connection.



Admin who


This will tell you what is up or down etc



Admin who_is_down


Tells you specifically what isn’t working

If it's a connection, try suspending the connection and resuming it. If it's a rep agent, use sp_stop_rep_agent <dbname> from the dataserver and then sp_start_rep_agent <dbname> to get it working again; if that fails, check the error log.



Rs_subcmp


This is a program which can be found in the bin directory of the Replication Server installation. It is used to resynchronize the replicate database with the primary database after a system failure; the sysadmin manual has an extensive description of its use.


The best way to run this is to create a config file for it to run against. The config file will look something like:


PDS = TOKYO_DS

RDS = SYDNEY_DS

PDB = pubs2

RDB = pubs2

PTABLE = titleauthor

RTABLE = titleauthor

PSELECT = select au_id, title_id, au_ord, royaltyper from titleauthor order by au_id,\ title_id

RSELECT = select au_id, title_id, au_ord, royaltyper from titleauthor order by au_id,\ title_id

PUSER = repuser

RUSER = repuser

PPWD = piglet

RPWD = piglet

KEY = au_id

KEY = title_id

RECONCILE = Y

VISUAL = Y

NUM_TRIES = 3

WAIT = 10



If this file was called reconcile1.cfg then the command would be

rs_subcmp -f reconcile1.cfg


If you want to reconcile the primary database from the replicate database, swap around the PDB and RDB and also the PDS and RDS. You could either have a separate file for this or use command line options: -d for the replicate database and -D for the primary database, and -s for the replicate dataserver and -S for the primary dataserver.




Sysadmin log_first_tran


This command will find the transaction which failed to be replicated; it is run from the replication server controlling the database with the problem. It writes the output to the exceptions log, which consists of three tables in the RSSD for the Repserver: rs_excepthdr, rs_exceptscmd and rs_exceptslast. These tables should be purged periodically once the problem has been fixed. Use rs_helpexception to see a list of the exceptions stored in the RSSD, and rs_delexception to delete one.

Duplicate row removal from a table

The way to do this is to create a copy of the table using select into. In this example, assume a table called apple in database plum, where the column you want to make unique is called fruit:

select * into tempdb..apple

from plum..apple


Next, create an identity column on this new table:

alter table tempdb..apple

add pip numeric(5,0) identity


The following query will now eliminate the duplicate entries of fruit;

delete from tempdb..apple

where pip in (select a.pip from tempdb..apple a, tempdb..apple b

where a.fruit = b.fruit

and a.pip > b.pip)


You will now have the table unique on the column fruit, and you can put the data back into the original table (minus the identity column) as follows. First delete or truncate the original plum..apple, then:

insert plum..apple

select fruit, … , <columns in plum..apple>

from tempdb..apple
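As a quick check that the de-duplication worked, the following should return no rows:

select fruit, count(*)
from plum..apple
group by fruit
having count(*) > 1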

Setting up a Historical server recording



As a rough guide, just to see how it all works: after you have the Monitor Server and Historical Server running, execute the following on the Historical Server (log in using isql).


Example

Suppose you want to record, once every 30 seconds, the reads, writes, and total I/O per device on an Adaptive Server that is monitored by the SMS1100 Monitor Server. You can use the following steps to record the data, create a table to accept the data, and then use the bcp utility to import the data into the table. In this example, the view data file that is created by Historical Server for the view is d1_1_device_io.

  1. Use isql to log in to Historical Server.
  2. Record the data by using the following commands:
hs_create_recording_session capella_mon, 30
go
hs_create_view device_io,
    "device name", "value for sample", 
    "device reads", "value for sample",
    "device writes", "value for sample",
    "device i/o", "value for sample"
go
hs_initiate_recording
go
  3. Use isql to log in to Adaptive Server. Because the recording session contains only one view (“device_io”), create just one table, in the history database, to store data for that view, using the following Transact-SQL commands:
use history
go
create table device_io
(
sample_time          datetime         not null,
device_name          varchar (255)    null,
device_reads_val     int              null,
device_writes_val    int              null,
device_io_val        int              null
)
go
  4. After the recording session is complete, you can use bcp to import data from the view’s data file into the database table. Use the following command:
bcp history..device_io in d1_1_device_io -c
-e d1_1_device_io.err -U username
-P password -S server -t\,
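
Once the data has been loaded, a quick sanity check such as the following sketch totals the recorded I/O per device (using the table created above):

use history
go
select device_name, sum(device_io_val)
from device_io
group by device_name
go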
Appendix

Table of data items and definitions

Table A-1 describes the data items available for inclusion in recording session views. The table lists data items in alphabetical order and provides the following information about each one:

  • A definition
  • An Adaptive Server release dependency designation
  • A result or key designation

Table A-1: Data items and definitions

Application Execution Class Version: 11.5 and later

Configured execution class, if any, for a given application name. Because of override features, the configured execution class does not necessarily reflect the priority and engine group at runtime. The following notations are used in the monitor:

  • Blank – no execution class is configured for this application.
  • Execution class name – the execution class configured for the application in general, without consideration for specific logins. (That is, an execution class is configured for the application with null scope.)
  • Execution class name followed by asterisk (*) – in addition to the execution class configured for the application in general, additional execution classes are configured for specific logins. (That is, an execution class is configured for the application with null scope and at least one additional execution class is configured for the application with a specific scope.)
  • An asterisk (*) without execution class – no execution class is configured for the application in general, but execution classes are configured for specific logins using this application. (That is, at least one execution class is configured for the application with a specific scope.)

Type: Result

Application Name Version: 11.0 and later

Name of application for which other statistics are being accumulated. Views that contain Application Name report only on processes that are active as of the end of the sample period.

Application name is mutually exclusive with Process ID in a view.

Type: Key

Blocking Process ID Version: 11.0 and later

ID of the process that holds a lock that another process is waiting for, if any. A returned value of zero means that the other process is not blocked.

Type: Result

Cache Efficiency Version: 11.0 and later

The number of cache hits per second per megabyte of a particular data cache.

Type: Result

Cache Hit Pct Version: 11.0 and later

The fraction of the page reads for objects bound to a particular data cache that are satisfied from the cache computed from the following formula:

cache_hits / (cache_hits + cache_misses) * 100

Type: Result

Cache Hits Version: 11.0 and later

The number of times a page read was satisfied from a particular data cache.

Type: Result

Cache ID Version: 11.0 and later

The ID of a data cache in Adaptive Server version 11.0 or later. Particular database tables and indexes may be bound to a specific data cache, or all objects in a database may be bound to the same data cache. No object may be bound to more than one data cache.

Type: Key

Cache Misses Version: 11.0 and later

Number of times a page read was satisfied from disk rather than from a particular data cache. This data item includes failed attempts to locate pages in the data caches during page allocation. Therefore, the number of physical page reads reported may be overstated. If this occurs, the percentage of data cache misses reported by Cache Hit Pct is understated.

Type: Result

Cache Name Version: 11.0 and later

The name of a data cache. Particular database tables and indexes may be bound to a specific data cache, or all objects in a database may be bound to the same data cache. No object may be bound to more than one cache.

Type: Key

Cache Prefetch Efficiency Version: 11.0 and later

A percentage comparing the count of pages being reused in large I/O buffers (the denominator) to the number of those pages that were ever referenced by Adaptive Server. When a buffer is reused, all of the pages in it are counted as reused. Buffers are reused when there are no free buffers in the pool to accept a new physical read from a database device. The number of reused pages referenced by Adaptive Server divided by the result of the number of pages per buffer multiplied by the number of reused buffers indicates the effectiveness of large I/O fetches.
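
Expressed in the style of the other formulas in this table (these counter names are illustrative, not separate data items):

reused_pages_referenced / (pages_per_buffer * buffers_reused) * 100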

Regardless of how many buffer pools are configured in a named data cache, Adaptive Server only uses two of them. It uses the 2K buffer pool and the pool configured with the largest-sized buffers. Prefetch effectiveness does not apply to the 2K buffer pool, since 2K grabs are not considered large I/O. Therefore, prefetch effectiveness applies to the largest buffer pool in the cache. For example, if a data cache has pools of buffers sized 2K, 8K, and 16K, the 8K pool is not used, and this metric reflects the effectiveness of large I/O in the 16K buffer pool.

If the ratio is large, then prefetching is effective; otherwise, prefetching is not providing much benefit. This may suggest that a buffer pool should be eliminated (or it may imply that a clustered index on some table is fragmented, and that the index should be dropped and re-created).

Type: Result

Cache Refer and Reuse Version: 11.0 and later

The number of pages in buffers that were both referenced and reused. This count is employed when determining the efficiency of prefetching buffers (see Cache Prefetch Efficiency). This data item, unlike Cache Prefetch Efficiency, includes activity in the default 2K buffer pool.

See Cache Prefetch Efficiency for a definition of buffer reuse.

Type: Result

Cache Reuse Version: 11.0 and later

The number of pages in buffers that were reused. A large value indicates a high rate of turnover (of buffers in this memory pool), and suggests that the pool may be too small. A zero value suggests that this memory pool may be too large. This data item, unlike Cache Prefetch Efficiency, includes activity in the default 2K buffer pool.

See Cache Prefetch Efficiency for a definition of buffer reuse.

Type: Result

Cache Reuse Dirty Version: 11.0 and later

The number of times that a buffer that was reused had changes that needed to be written. A non-zero value indicates that the wash size is too small.

See Cache Prefetch Efficiency for a definition of buffer reuse.

Type: Result

Cache Size Version: 11.0 and later

The size of a data cache, in megabytes.

Type: Result

Cache Spinlock Contention Version: 11.0 and later

The fraction of the requests for a data cache’s spinlock that were forced to wait.

spinlock_waits / spinlock_requests

Type: Result

Code Memory Size Version: 11.0 and later

Amount of memory in bytes allocated to Adaptive Server executable code.

Type: Result

Connect Time Version: 11.0 and later

Number of seconds elapsed since the process was started or since the session was started, whichever is smaller.

Type: Result

CPU Busy Percent Version: 11.0 and later

Percentage of total server CPU ticks that the Adaptive Server CPU was busy.

Type: Result

CPU Percent Version: 11.0 and later

If used in a view with Process ID, this is the percentage of time a single process was in the Running state over the time all processes were in the Running state.

If used in a view with Application Name, this is the percentage of time the set of processes running a given application were in the Running state over the time all processes were in the Running state.

Type: Result

CPU Time Version: 11.0 and later

If used in a view with no keys, this data item is the total CPU busy time, in seconds, on the server. If used with keys, this data item is the busy time, in seconds, that was used by each process, application, or engine. Process ID and Application Name are mutually exclusive.

Type: Result

CPU Yields Version: 11.0 and later

Number of times that the Adaptive Server yielded to the operating system.

Type: Result

Current Application Name Version: 11.0 and later

The name of the application that is currently executing on a particular process.

Type: Result

Current Engine Version: 11.0 and later

Number of the Adaptive Server engine on which the process was running most recently.

Type: Result

Current Execution Class

Version: 11.5 and later

The name of the execution class under which a process is currently running.

Type: Result

Current Process State Version: 11.0 and later

Current state of a process. See Process State for definitions of the possible states.

Type: Result

Current Stmt Batch ID Version: 11.5 and later

The ID of a particular query batch being executed on a particular process.

Type: Result

Current Stmt Batch Text Version: 11.5 and later

The text of a particular query batch being executed for a particular process. This text may be only an initial substring of the complete text in a query batch. The amount of text stored in this field is determined by the Adaptive Server max SQL text monitored configuration parameter.

Type: Result

Current Stmt Batch Text Byte Offset Version: 11.5 and later

The byte offset to the beginning of a statement within the query batch or stored procedure being executed for a particular process.

If both Current Stmt Procedure Database ID and Current Stmt Procedure ID are equal to 0, then the statement is the currently executing SQL statement in the query batch. Otherwise, the statement is the currently executing SQL statement in the stored procedure uniquely identified by these two IDs.

Type: Result

Current Stmt Batch Text Enabled Version: 11.5 and later

Reports whether Adaptive Server (version 11.5 and later) is saving the SQL text of the currently executing query batches and, if so, how much.

Value of 0 = saving SQL text disabled

Value of 1 or more = maximum number of bytes of batch text per server process that can be saved.

Type: Result

Current Stmt Context ID Version: 11.5 and later

The ID that uniquely identifies a stored procedure invocation within a particular query batch being executed for a particular process.

Type: Result

Current Stmt CPU Time Version: 11.5 and later

The amount of time (in seconds) that the currently executing SQL statement has spent in the running state.

Type: Result

Current Stmt Elapsed Time Version: 11.5 and later

The amount of time (in seconds) that the currently executing SQL statement has been running.

Type: Result

Current Stmt Line Number Version: 11.5 and later

The number of the line (within a query batch or stored procedure) that contains the beginning of the currently executing SQL statement for a particular process. The currently executing SQL statement is in the query batch if both Current Stmt Procedure Database ID is equal to 0 and Current Stmt Procedure ID is equal to 0.

Otherwise, the currently executing SQL statement is in the stored procedure uniquely identified by these two IDs.

Type: Result

Current Stmt Locks Granted After Wait Version: 11.5 and later

Number of lock requests by the currently executing SQL statement that were granted after waiting.

Type: Result

Current Stmt Locks Granted Immediately Version: 11.5 and later

Number of lock requests by the currently executing SQL statement that were granted immediately or were not needed (because sufficient locking was already held by the requestor).

Type: Result

Current Stmt Locks Not Granted Version: 11.5 and later

Number of lock requests by the currently executing SQL statement that were denied.

Type: Result

Current Stmt Logical Reads Version: 11.5 and later

Number of data page reads satisfied from cache or from device reads by the currently executing SQL statement.

Type: Result

Current Stmt Number Version: 11.5 and later

The number of the statement (appearing in a query batch or stored procedure) that is the currently executing SQL statement for a particular process. The currently executing SQL statement is in the query batch if both Current Stmt Procedure Database ID is equal to 0 and Current Stmt Procedure ID is equal to 0.

Otherwise, the currently executing SQL statement is in the stored procedure uniquely identified by these two IDs.

A value of 0 indicates partial result data for the currently executing SQL statement. In other words, this SQL statement began executing before monitoring began. Performance metrics are available but numbers reflect only the time period since the start of monitoring.

Type: Result

Current Stmt Page I/O Version: 11.5 and later

Number of combined logical page reads and page writes by the currently executing SQL statement.

Type: Result

Current Stmt Physical Reads Version: 11.5 and later

Number of data page reads by the currently executing SQL statement that could not be satisfied from the data cache.

Type: Result

Current Stmt Procedure Database ID Version: 11.5 and later

The database ID of the stored procedure (including triggers, a special kind of stored procedure) that contains the currently executing SQL statement for a particular process. If the currently executing SQL statement is not contained in a stored procedure, this ID is 0.

Type: Result

Current Stmt Procedure Database Name Version: 11.5 and later

The database name of the stored procedure (including triggers, a special kind of stored procedure) that contains the currently executing SQL statement for a particular process. If the currently executing SQL statement is not contained in a stored procedure, this name is “**NoDatabase**”.

Type: Result

Current Stmt Procedure ID Version: 11.5 and later

The ID of the stored procedure (including triggers, a special kind of stored procedure) that contains the currently executing SQL statement for a particular process. If the currently executing SQL statement is not contained in a stored procedure, this ID is 0.

Type: Result

Current Stmt Procedure Name Version: 11.5 and later

The name of the stored procedure (including triggers, a special kind of stored procedure) that contains the currently executing SQL statement for a particular process. If the currently executing SQL statement is not contained in a stored procedure, this name is “**NoObject**”.

Type: Result

Current Stmt Procedure Owner Name Version: 11.5 and later

The owner name of the stored procedure (including triggers, a special kind of stored procedure) that contains the currently executing SQL statement for a particular process. If the currently executing SQL statement is not contained in a stored procedure, this name is “**NoOwner**”.

Type: Result

Current Stmt Procedure Text Version: 11.5 and later

The text of a particular stored procedure (including triggers, a special kind of stored procedure) being executed for a particular process. If both Current Stmt Procedure Database ID is equal to 0 and Current Stmt Procedure ID is equal to 0 then a stored procedure is not currently executing and this text is a null-terminated empty string (“”).

If the text is not available (because this stored procedure was compiled and its text was discarded, or because the text is stored in an encrypted format), then this text is a null-terminated empty string (“”).

Type: Result

Current Stmt Query Plan Text Version: 11.5 and later

The text of the query plan for a particular query being executed for a particular process.

If the text is not available (because the Adaptive Server has removed this plan from its catalog of query plans), then this text is a null-terminated empty string (“”).

Type: Result

Current Stmt Start Time Version: 11.5 and later

The date and time, in the time zone of Adaptive Server, when the currently executing SQL statement began running.

If this SQL statement began running before monitoring began, then this result is the date and time that activity was first encountered for this statement.

Type: Result

Current Stmt Text Byte Offset Version: 11.5 and later

The byte offset to the beginning of a statement within the query batch or stored procedure being executed for a particular process.

If both Current Stmt Procedure Database ID is equal to 0 and Current Stmt Procedure ID is equal to 0, then the statement is the currently executing SQL statement in the query batch. Otherwise, the statement is the currently executing SQL statement in the stored procedure uniquely identified by those two IDs.

Type: Result

Database ID Version: 11.0 and later

Unique identification of a database.

Type: Key

Database Name Version: 11.0 and later

Name of the database.

Type: Result

Deadlock Count Version: 11.0 and later

Number of deadlocks.

Type: Result

Demand Lock Version: 11.0 and later

A character string (Y or N) that indicates whether or not a lock has been upgraded to demand lock status.

Type: Result

Device Hit Percent Version: 11.0 and later

The percentage of requests for access to a device that were granted immediately, computed from the following formula:

device_hits / (device_hits + device_misses) * 100

Type: Result

Device Hits Version: 11.0 and later

Number of times access to a device was granted.

Type: Result

Device I/O Version: 11.0 and later

Combination of device reads and device writes.

Type: Result

Device Misses Version: 11.0 and later

Number of times access to a device had to wait.

Type: Result

Device Name Version: 11.0 and later

Name of a database device defined in Adaptive Server.

Type: Key

Device Reads Version: 11.0 and later

Number of reads made from a device.

Type: Result

Device Writes Version: 11.0 and later

Number of writes made to a device.

Type: Result

Elapsed Time Version: 11.0 and later

The time increment, in seconds, either from one data refresh to the next (sample) or from the creation of the view to the present (session).

Type: Result

Engine Number Version: 11.0 and later

Number of the Adaptive Server engine.

Type: Key

Host Name Version: 11.0 and later

The name of the host computer that established a particular connection.

Type: Result

Index Logical Reads Version: 11.0 and later

Number of index page reads satisfied from cache and from database devices.

Type: Result

Index Physical Reads Version: 11.0 and later

Number of index page reads that could not be satisfied from the data cache.

Type: Result

Kernel Process ID Version: 11.0 and later

An Adaptive Server process identifier that remains unique over long periods of time.

Type: Key

Kernel Structures Memory Size Version: 11.0 and later

Amount of memory in bytes allocated to the kernel structures.

Type: Result

Large I/O Denied Version: 11.0 and later

The number of times the buffer manager did not satisfy requests (of the optimizer) to load data into a buffer in this data cache by fetching more than one contiguous page from disk at a time.

Type: Result

Large I/O Performed Version: 11.0 and later

The number of times the buffer manager satisfied requests (of the optimizer) to load data into a buffer in this data cache by fetching more than one contiguous page from disk at a time.

Type: Result

Large I/O Requested Version: 11.0 and later

The number of times the optimizer made requests (of the buffer manager) to load data into a buffer in this data cache by fetching more than one contiguous page from disk at a time.

Type: Result

Lock Count

Version: 11.0 and later

Number of locks.

Type: Result

Lock Hit Percent

Version: 11.0 and later

Percentage of successful requests for locks.

Type: Result

Lock Result Version: 11.0 and later

Result of a logical lock request. Lock result values are:

  • 1 – granted immediately.
  • 2 – not needed; requestor already held a sufficient lock.
  • 3 – waited; requestor waited.
  • 4 – did not wait; lock was not available immediately and the requestor did not want the lock request to be queued.
  • 5 – deadlock; requestor selected as deadlock victim.
  • 6 – interrupted; the lock request was interrupted by an attention condition.

Type: Key

Lock Results Summarized Version: 11.0 and later

Lock results summarized at a granted or not granted level.

  • 1 – the lock result summary granted is composed of the lock results: granted, not needed, and waited.
  • 2 – the lock result summary not granted is composed of the lock results: did not wait, deadlock, and interrupted.

Type: Key

Lock Status Version: 11.0 and later

Current status of a lock. The possible lock status values are:

  • 1 – held and blocking.
  • 2 – held and not blocking.
  • 3 – requested and blocked.
  • 4 – requested and not blocked.

Type: Key

Lock Status Count

Version: 11.0 and later

Number of locks in each lock status. This is a snapshot value.

Type: Result

Lock Type Version: 11.0 and later

Adaptive Server protects tables or data pages currently used by active transactions by locking them. Adaptive Server employs the following lock types:

  • 1 – exclusive table lock.
  • 2 – shared table lock.
  • 3 – exclusive intent lock.
  • 4 – shared intent lock.
  • 5 – exclusive page lock.
  • 6 – shared page lock.
  • 7 – update page lock.
  • 8 – exclusive row lock.
  • 9 – shared row lock.
  • 10 – update row lock.

Type: Key

Locks Being Blocked Count Version: 11.0 and later

Number of locks being blocked by the process that holds this “held and blocking” lock.

Type: Result

Locks Granted Immediately Version: 11.5 and later

Number of locks that were granted immediately, without having to wait for another lock to be released.

Type: Result

Locks Granted After Wait Version: 11.5 and later

Number of locks that were granted after waiting for another lock to be released.

Type: Result

Locks Not Granted Version: 11.5 and later

Number of locks that were requested but not granted.

Type: Result

Log Contention Percent Version: 11.0 and later

The percentage of times, out of all the times a user log cache was flushed into the transaction log, that the flush had to wait for the log semaphore.
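
In the style of the other formulas here (counter names illustrative):

log_semaphore_waits / log_cache_flushes * 100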

A high percentage may indicate that the user log cache size should be increased.

Type: Result

Logical Page Reads Version: 11.0 and later

Number of data page reads per unit of time, whether satisfied from cache or from a database device.

Type: Result

Login Name Version: 11.0 and later

Login name associated with Adaptive Server processes.

Type: Result

Most Active Device I/O Version: 11.0 and later

Number of combined reads and writes against the device with the most activity during a given time interval.

Type: Result

Most Active Device Name Version: 11.0 and later

Name of the device with the largest number of combined reads and writes during a given time interval.

Type: Result

Net Bytes Received Version: 11.0 and later

Number of network bytes received.

Type: Result

Net Bytes Sent Version: 11.0 and later

Number of network bytes sent.

Type: Result

Net Default Packet Size Version: 11.0 and later

Default network packet size.

Type: Result

Net I/O Bytes Version: 11.0 and later

Total number of network bytes sent and received.

Type: Result

Net Max Packet Size Version: 11.0 and later

Configured maximum size for a network packet.

Type: Result

Net Packet Size Received Version: 11.0 and later

Average size of network packets received.

Type: Result

Net Packet Size Sent Version: 11.0 and later

Average size of network packets sent.

Type: Result

Net Packets Received Version: 11.0 and later

Number of network packets received.

Type: Result

Net Packets Sent Version: 11.0 and later

Number of network packets sent.

Type: Result

Number of Engines Version: 11.0 and later

Number of engines configured for Adaptive Server.

Type: Result

Number of Processes Version: 11.0 and later

Number of processes currently running on Adaptive Server, or, if used with the key Application Name, the number of processes currently running a given application.

Type: Result

Object ID Version: 11.0 and later

ID of a database object. The object returned is a database table, a stored procedure, or a temporary table.

Object IDs might be negative numbers. The object IDs that Adaptive Server assigns to temporary tables can be positive or negative.

Type: Key

Object Name Version: 11.0 and later

Database object name. The string **TempObject** is reported for temporary tables.

Type: Result

Object Type Version: 11.0 and later

Type of database object:

  • 0 – none.
  • 1 – stored procedure (including triggers).
  • 2 – table.

Type: Result

Owner Name Version: 11.0 and later

Name of an object’s owner.

Type: Result

Page Cache Size Version: 11.0 and later

Amount of memory in bytes allocated for the page cache.

Type: Result

Page Hit Percent Version: 11.0 and later

Percentage of times that a data page read could be satisfied from cache without requiring a physical page read.

Type: Result

Page I/O Version: 11.0 and later

Combined total of logical page reads and page writes.

Type: Result

Page Number Version: 11.0 and later

Data page number for a particular lock or lock request.

Type: Key

Page Writes Version: 11.0 and later

Number of pages written to a database device.

Type: Result

Physical Page Reads Version: 11.0 and later

Number of data page reads that could not be satisfied from the data cache.

Type: Result

Procedure Buffer Size Version: 11.0 and later

Amount of memory in bytes allocated for the procedure buffer.

Type: Result

Procedure CPU Time Version: 11.0 and later

Number of seconds of CPU time spent executing a stored procedure.

Type: Result

Procedure Database ID Version: 11.0 and later

Database ID of the active stored procedure.

Type: Key

Procedure Database Name Version: 11.0 and later

Database name of the active stored procedure.

Type: Key

Procedure Elapsed Time Version: 11.0 and later

Number of seconds elapsed during a stored procedure execution. All statistic types valid with this data item report time in units of seconds. For example, “Procedure Elapsed Time”, “Average for Session” reports the average number of elapsed seconds per procedure execution.

Type: Result

Procedure Execution Class Version: 11.5 and later

Configured execution class, if any, for a given stored procedure.

Type: Result

Procedure Execution Count Version: 11.0 and later

Number of times a stored procedure, or a line in a stored procedure, was executed.

Type: Result

Procedure Header Size Version: 11.0 and later

Amount of memory in bytes allocated for the procedure header.

Type: Result

Procedure Hit Percent Version: 11.0 and later

Percentage of times that a procedure execution found the procedure’s query plan in procedure cache and available for use.

Type: Result

Procedure ID Version: 11.0 and later

Active stored procedure. Active indicates the top-level stored procedure was called.

Type: Key

Procedure Line Number Version: 11.0 and later

Stored procedure line number.

Type: Key

Procedure Line Text Version: 11.0 and later

Entire text of the stored procedure.

Type: Result

Procedure Logical Reads Version: 11.0 and later

Number of requests to execute a stored procedure, whether satisfied from procedure cache or with a read from sysprocedures.

Type: Result

Procedure Name Version: 11.0 and later

Name of the active stored procedure.

Type: Result

Procedure Owner Name Version: 11.0 and later

Name of the owner of the active stored procedure.

Type: Result

Procedure Physical Reads Version: 11.0 and later

Number of requests to execute a stored procedure, for which a read from sysprocedures was necessary.

Type: Result

Procedure Statement Number Version: 11.0 and later

Statement number within a stored procedure. A single stored procedure line can contain one or more statements.

Type: Key

Process ID Version: 11.0 and later

Adaptive Server process identification number. Views that contain Process ID only report on processes that are active as of the end of the sample period. Process ID is mutually exclusive with Application Name in a view.

Type: Key

Process State Version: 11.0 and later

Process state:

  • 0 – None.
  • 1 – alarm sleep. Waiting on an alarm.
  • 2 – background. Adaptive Server process executing.
  • 3 – bad status. Undetermined error condition.
  • 4 – infected. Tagged by Adaptive Server as unprocessable.
  • 5 – lock sleep. Waiting on a lock acquisition.
  • 6 – received sleep. Waiting on a network read.
  • 7 – runnable. Waiting to run according to priority and availability of CPU.
  • 8 – running. Executing.
  • 9 – send sleep. Waiting on a network send.
  • 10 – sleeping. Paused for any other reason not listed here, such as: waiting on device I/O (physical reads) or waiting for client activity.
  • 11 – stopped. Process terminated.
  • 12 – terminating. Process terminating.
  • 13 – unknown. Process state undeterminable.
  • 14 – remote I/O. Waiting on a remote (OMNI) server to complete an operation.
  • 15 – synch sleep. Waiting to synchronize with some other server process(es) that are working in parallel to execute a given query.

Type: Key

Process State Count Version: 11.0 and later

Number of processes in a particular state.

Type: Result

Rows Deleted Version: 11.0 and later

Number of rows deleted from a database table.

Type: Result

Rows Deleted Deferred Version: 11.0 and later

Number of rows deleted from a database table in deferred mode.

Type: Result

Rows Deleted Direct Version: 11.0 and later

Number of rows deleted from a database table in direct mode.

Type: Result

Rows Inserted Version: 11.0 and later

Insertions into a database table.

Type: Result

Rows Inserted Clustered Version: 11.0 and later

Insertions into database tables with clustered indexes.

Type: Result

Rows Inserted Heap Version: 11.0 and later

Insertions into database tables without clustered indexes.

Type: Result

Rows Updated Version: 11.0 and later

Updates to a database table.

Type: Result

Rows Updated Deferred Version: 11.0 and later

Updates that require two steps to complete. First, records for deleting the existing entry and inserting the new entry are written to the log, but only the deletes are actually performed on the data pages. In the second step, the log is rescanned and the insert operations performed on the data pages.

Type: Result

Rows Updated Direct Version: 11.0 and later

The sum of expensive, in-place, and not-in-place updates (everything except deferred updates).

Type: Result

Rows Updated Expensive Version: 11.0 and later

A type of direct update that deletes a row from its original location and inserts it in a new location.

Type: Result

Rows Updated In Place Version: 11.0 and later

A type of direct update that does not require rows to move on a data page.

Type: Result

Rows Updated Not In Place Version: 11.0 and later

A type of direct update that does not require the updated row to move, but because the length of the updated row changes, other rows on the data page are moved. Also known as cheap updates.

Type: Result

Select Statements

Version: 11.0 and later

Number of SELECT or OPEN CURSOR statements.

Type: Result

Server Structures Size Version: 11.0 and later

Amount of memory in bytes allocated for the Adaptive Server structures.

Type: Result

SQL Server Name Version: 11.0 and later

Name of the Adaptive Server that is being monitored as specified by the -S parameter in the start-up command for the Monitor Server being used.

Type: Result

SQL Server Version Version: 11.0 and later

Version of the Adaptive Server that is being monitored. For more information, see the global @@version variable in the Transact-SQL User’s Guide.

Type: Result

Thread Exceeded Max Version: 11.5 and later

The number of times a query plan was adjusted at runtime because of attempting to exceed the configured limit of threads in the server-wide worker thread pool in Adaptive Server version 11.5 and later.

Type: Result

Thread Exceeded Max Percent Version: 11.5 and later

The percentage of time a query plan was adjusted at runtime because of attempting to exceed the configured limit of threads in the server-wide worker thread pool in Adaptive Server version 11.5 and later.

Type: Result

Thread Max Used Version: 11.5 and later

The maximum number of threads from the server-wide worker thread pool that were concurrently in use on the server.

Type: Result

Time Waited on Lock Version: 11.0 and later

Amount of time (in seconds) that a lock request has been waiting.

Type: Result

Timestamp Version: 11.0 and later

In recording session views and in playback views when summarization_level is raw, the date and time on the Adaptive Server when the recording session data was gathered.

In playback views, when summarization_level is actual, entire, or a user-defined interval, the time is converted to the time zone of Historical Server.

For more information, see the getdate() function in the Transact-SQL User’s Guide.

Type: Result

Timestamp Datim Version: 11.0 and later

In recording session views and in playback views when summarization_level is raw, the date and time on the Adaptive Server when the recording session data was gathered, returned in a CS_DATETIME structure. For more information, see the getdate() function in the Transact-SQL User’s Guide.

In playback views, when summarization_level is actual, entire, or a user-defined interval, the time is converted to the time zone on Historical Server.

Type: Result

Transactions Version: 11.0 and later

Total number of committed Transact-SQL statement blocks delimited by a begin transaction and commit transaction statement.

Type: Result

 


Installing Sybase Auditing

Sybase Auditing

  1. Create the auditing devices and auditing database with the Transact-SQL disk init and create database commands. For example:

USE master

go

DISK INIT

NAME='sybsecurity_data1',

PHYSNAME='/sybdevices/cids/sybsecurity_data1.dat',

VDEVNO=60,

SIZE=5120,

VSTART=0,

CNTRLTYPE=0,

DSYNC=true

go

EXEC sp_diskdefault 'sybsecurity_data1',defaultoff

go


USE master

go

DISK INIT

NAME='sybsecurity_data2',

PHYSNAME='/sybdevices/cids/sybsecurity_data2.dat',

VDEVNO=61,

SIZE=5120,

VSTART=0,

CNTRLTYPE=0,

DSYNC=true

go

EXEC sp_diskdefault 'sybsecurity_data2',defaultoff

go


USE master

go

DISK INIT

NAME='sybsecurity_data3',

PHYSNAME='/sybdevices/cids/sybsecurity_data3.dat',

VDEVNO=62,

SIZE=5120,

VSTART=0,

CNTRLTYPE=0,

DSYNC=true

go

EXEC sp_diskdefault 'sybsecurity_data3',defaultoff

go


USE master

go

DISK INIT

NAME='sybsecurity_log1',

PHYSNAME='/sybdevices/cids/sybsecurity_log1.dat',

VDEVNO=63,

SIZE=2560,

VSTART=0,

CNTRLTYPE=0,

DSYNC=true

go


EXEC sp_diskdefault 'sybsecurity_log1',defaultoff

go

USE master

go



CREATE DATABASE sybsecurity

ON sybsecurity_data1=10

LOG ON sybsecurity_log1=5

go

USE master

go

EXEC sp_dboption 'sybsecurity','trunc log on chkpt',true

go

EXEC sp_dboption 'sybsecurity','abort tran on log full',true

go

use sybsecurity

go

EXEC sp_changedbowner 'sa'

go

checkpoint

go

  2. Use isql to execute the installsecurity script:

cd $SYBASE/ASE-12_5/scripts
setenv DSQUERY server_name
isql -Usa -Ppassword -Sserver_name < installsecurity


  3. Shut down and restart Adaptive Server to initialise the auditing subsystem.


  4. When you have completed these steps, the sybsecurity database has one audit table (sysaudits_01) created on its own segment. You can enable auditing at this time, but you should first add more audit tables with sp_addaudittable.


  5. Now create the extra sysaudits tables in sybsecurity. Extend the sybsecurity database onto the devices you initialised in step 1:

use master

go

alter database sybsecurity on sybsecurity_data2= 10

go

use sybsecurity

go

sp_addaudittable 'sybsecurity_data2'

go


use master

go

alter database sybsecurity on sybsecurity_data3= 10

go

use sybsecurity

go

sp_addaudittable 'sybsecurity_data3'

go



  6. Create the repository database sybaudit:

USE master

go

CREATE DATABASE sybaudit

ON DATA1=500

LOG ON LOG1=50

go

USE master

go

EXEC sp_dboption 'sybaudit','select into/bulkcopy/pllsort',true

go

EXEC sp_dboption 'sybaudit','trunc log on chkpt',true

go

USE sybaudit

go

CHECKPOINT

go

USE sybaudit

go

EXEC sp_changedbowner 'sa'

go



  7. Create the repository table audit_data in sybaudit:

use sybaudit

go

CREATE TABLE dbo.audit_data

( event     smallint     NOT NULL,

eventmod  smallint     NOT NULL,

spid      smallint     NOT NULL,

eventtime datetime     NOT NULL,

sequence  smallint     NOT NULL,

suid      int     NOT NULL,

dbid      smallint     NULL,

objid     int          NULL,

xactid    binary(6)    NULL,

loginname varchar(30)  NULL,

dbname    varchar(30)  NULL,

objname   varchar(255)  NULL,

objowner  varchar(30)  NULL,

extrainfo varchar(255) NULL,

nodeid    tinyint      NULL

)

LOCK ALLPAGES

go


  8. Add a threshold action procedure covering each of the segments created above, using the example below:

Use sybsecurity

go

create procedure sp_audit_thresholdaction

as

declare @audit_table_number int

select @audit_table_number = value

from master.dbo.sysconfigures

where name = "current audit table"


exec sp_configure "current audit table", 0, "with truncate"


/* each begin/end block archives and truncates only the audit table that just filled */

if @audit_table_number = 1

begin

insert sybaudit.dbo.audit_data

select * from sysaudits_01

truncate table sysaudits_01

end

if @audit_table_number = 2

begin

insert sybaudit.dbo.audit_data

select * from sysaudits_02

truncate table sysaudits_02

end

if @audit_table_number = 3

begin

insert sybaudit.dbo.audit_data

select * from sysaudits_03

truncate table sysaudits_03

end

return(0)

go



  9. To apply this action to the three segments, issue the following commands:

Use sybsecurity

go

EXEC sp_addthreshold 'sybsecurity','aud_seg_01',200,'sp_audit_thresholdaction'

go

EXEC sp_addthreshold 'sybsecurity','aud_seg_02',200,'sp_audit_thresholdaction'

go

EXEC sp_addthreshold 'sybsecurity','aud_seg_03',200,'sp_audit_thresholdaction'

go
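
To check that the three thresholds have been registered, the standard sp_helpthreshold procedure can be run in sybsecurity:

use sybsecurity

go

sp_helpthreshold

go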



  10. Next, create a procedure to flush all the data from sybsecurity into the table in sybaudit:

Use sybaudit

go

IF OBJECT_ID('dbo.sp_aud_flush_display') IS NOT NULL

BEGIN

DROP PROCEDURE dbo.sp_aud_flush_display

IF OBJECT_ID('dbo.sp_aud_flush_display') IS NOT NULL

PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_aud_flush_display >>>'

ELSE

PRINT '<<< DROPPED PROCEDURE dbo.sp_aud_flush_display >>>'

END

go


CREATE PROCEDURE dbo.sp_aud_flush_display

AS

BEGIN

/* calling the threshold action three times cycles the "current audit table"
setting through all three sysaudits tables, archiving each into audit_data */

exec sybsecurity..sp_audit_thresholdaction

exec sybsecurity..sp_audit_thresholdaction

exec sybsecurity..sp_audit_thresholdaction

select distinct loginname from audit_data

END

go


EXEC sp_procxmode 'dbo.sp_aud_flush_display','unchained'

go




  11. Set the following configuration parameters in the Sybase server:

sp_configure "auditing", 1

go

-- this enables auditing

sp_configure "suspend audit when device full", 0

go

-- with 0, auditing is not suspended when an audit device fills

sp_configure "audit queue size", 100

go

-- this sets up the queue size for handling auditing events

Now, to audit all actions performed by users with the sa_role, issue the following command:

sp_audit “all”, “sa_role”, “all”, “on”

go
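
To confirm which audit options are now in force, the standard sp_displayaudit procedure can be run from the sybsecurity database:

use sybsecurity

go

sp_displayaudit

go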