Installing Heterogeneous Replication

To perform heterogeneous replication, i.e. replication to and from two different datasources (e.g. Sybase to Oracle or DB2 to Oracle), you need to use DirectConnect, or as it is now called, Enterprise Connect Data Access (ECDA).

The example below shows how to set up a connection to a new Oracle database, but it can be applied to other databases as well, obviously with some different scripts etc.

In the example below DirectConnect is already installed; unloading the software is not difficult anyway, so please refer to the product manuals for details.


1.       First you will need to create a new DirectConnect process by creating a new config file in the $SYBASE/$SYBASE_DCO/install directory on the server where DirectConnect is installed. Just copy the existing .cfg file and RUN file and alter them to reflect the new settings: in the RUN file just change the servername, and in the .cfg file add the new servername, the username (in this case the Oracle username), the path to the new errorlog file, and the connect_string.


2.       Add an entry for this new Direct Connect process in the local interfaces file like:

DCOCCDWP

master tcp ether p-***-tm16 ****

query tcp ether p-***-tm16 ****


3.       Also add this entry to the interfaces files for the REPServer and for the ASE server which is hosting the RSSD for the Repserver.


4.       Add the Oracle connection information into the tnsnames.ora file which is in $SYBASE/$SYBASE_DCO/network/admin:

DWMU =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = ***.**.***.**)(PORT = ****))

)

(CONNECT_DATA =

(SERVICE_NAME = DWMU)

)

)


5.       You can now start the new DirectConnect process by running e.g. startserver -f RUN_DCODWMU.


6.       You can check that it is working by trying to log into the Oracle database with e.g. isql -Udefacto -SDCOCCDWP -P<oracle password>.


7.       You now need to go to the Repserver and make a copy of the following files in the $SYBASE/$SYBASE_REP/scripts directory:

hds_oracle_udds.sql

hds_clt_ase_to_oracle.sql

hds_oracle_funcstrings.sql

hds_oracle_setup_for_replicate.sql

hds_oracle_connection_sample.sql


For the first three scripts you need to edit the scripts, enter your RSSD database name, and execute the scripts against the relevant ASE server housing the RSSD database for the Repserver, e.g. isql -Usa -Sdhp_SOLP -DREP_dhpsolo_RSSD -i$SYBASE/$SYBASE_REP/scripts/DCOCCDWP_hds_oracle_udds.sql


The script hds_oracle_setup_for_replicate.sql needs to be copied over to the DirectConnect Unix box and is run via DirectConnect against the replicate database, e.g. isql -Ucm -P<oracle password> -SDCODWMU -ihds_oracle_setup_for_replicate.sql


8.       The last step is to create the connection to the Oracle database from the repserver and that is done by modifying the hds_oracle_connection_sample.sql script so that it looks similar to:

create connection to DCOCCDWP.defacto

set error class to rs_sqlserver_error_class

set function string class to rs_oracle_function_class

set username to *******

set password to **********

set 'batch' to 'off'

go

You then run this script against the repserver.
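For example, assuming a repserver called REP_dhpsolo (a placeholder name here) and the usual sa login, the invocation might look like:

isql -Usa -SREP_dhpsolo -P<sa password> -i$SYBASE/$SYBASE_REP/scripts/hds_oracle_connection_sample.sql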


9.       To test that everything is OK you can try to connect to the Oracle database from the replication server with e.g. isql -Udefacto -SDCOCCDWP -P<oracle password>

If this connects then everything should work, at least in terms of connectivity. If it doesn't, you will probably have to open up the firewall both ways between the replication server port and the DirectConnect port, e.g. between pkg_solp (***.**.***.***) repserver port (2040) and dhp-tm16 (**.***.*.**) dco port (****).


10.   You then continue to define the rep defs, subscriptions etc.

How to add/update a table for replication


–          First you need to create two scripts: one to drop the existing subscription and replication definition, and one to create the new table, replication definition and subscription.

–          You also need to extract the existing replicate table, and also a user, e.g. origomw, if requested, and save this info.

–          Before stopping replication send an email to operations to let them know of the planned downtime.

–          At this point you should suspend the replication server connection from DB2, if it's not already down, and wait 5 minutes to make sure everything is applied.
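A sketch of the suspend command, assuming the replicate connection named later in this section is the one in question:

suspend connection to origo_test.deFaktoReplica
go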

–          Do a select count(*) against an existing replicate table and compare it with a wc -l of the bcp file to make sure the numbers match. For a new table this is not relevant.
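As a sketch, using the table and bcp file from the example further down:

select count(*) from deFaktoReplica.dbo.ASN2_RESOURCE_NB
go
-- compare with: wc -l /home/origo/deFaktoReplica/init/data/SYS3.ASN2.DDX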

–          Bcp out the existing table just in case something goes wrong.
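A minimal sketch, mirroring the bcp in command shown below (the output file name is just an example):

bcp deFaktoReplica.dbo.ASN2_RESOURCE_NB out /home/origo/deFaktoReplica/init/ASN2_RESOURCE_NB.bcp -c -Jiso_1 -Sorigo_test -Usa -P<password>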

–          Drop a user if that was requested

–          Drop subscription, rep def in rep server:

drop subscription DB2_ASN2_RESOURCE_NB_s2 for DB2_ASN2_RESOURCE_NB_d2
with replicate at origo_test.deFaktoReplica
without purge
go

drop replication definition "DB2_ASN2_RESOURCE_NB_d2"
go

–          Drop the table in ASE

use deFaktoReplica
go
drop table ASN2_RESOURCE_NB
go

–          Create the table again

–          Bcp the data in with a command like:

bcp deFaktoReplica.dbo.ASN2_RESOURCE_NB in /home/origo/deFaktoReplica/init/data/SYS3.ASN2.DDX -r " \n" -t "\t" -c -Jiso_1 -Sorigo_test -Usa -P<password> -e bcplog.txt -z -b 10000 -m 100000

o   If you get problems, check that the number of columns etc. match, and also check whether the row/tab delimiter is correct; sometimes there is an extra tab in the row delimiter, in which case try using -r "\t \n".

–          Add the primary key, triggers and other indexes etc into the table, and possibly also a rep server specific column like changed_date.

–          Add the origomw user to the database.

–          Add the replication definition and subscription.

create replication definition "DB2_ASN2_RESOURCE_NB_d2"
with primary at TSTA.P825RAD1
with primary table named "DB2_ASN2_RESOURCE_NB"
with replicate table named "ASN2_RESOURCE_NB"
(
"RESR_ELMT_ID"          int,
"RECORD_EFF_END_DT"     datetime,
"RECORD_EFF_END_TM"     datetime,
"RECORD_EFF_STR_DT"     datetime,
"RECORD_EFF_STR_TM"     datetime,
"SRVC_LOC_ID"           numeric,
"RESR_TYPE"             char(6),
"RESR_ID"               char(22),
"ACCT_ID"               numeric,
"RESR_EFF_STR_DT"       datetime,
"RESR_EFF_END_DT"       datetime,
"ORD_ITEM_ID"           numeric,
"FST_USG_DT"            datetime,
"REINSTATE_CD"          char(4),
"REINSTALL_DT"          datetime,
"DISCN_CD"              char(4),
"ASN_RESR_DISCN_DT"     datetime,
"VERBAL_TRANSLATION"    char(20),
"COMMENT_ID"            numeric,
"REF_SEQ_NUM"           numeric,
"INIT_INSTALL_DT"       datetime,
"LAST_CHANGE_DT"        datetime,
"RESR_GRP_TYP"          char(6),
"ORD_ITEM_SEQ"          smallint,
"PRIORITY_CD"           char(1),
"SUB_STATUS_CD"         char(2),
"PRMRY_COMP_CD"         char(6),
"SECNDRY_COMP_CD"       char(6),
"NUFS_NET_SRVC_TYP"     char(3),
"NUFS_NUM_CAT"          char(2),
"USER_ID"               char(8),
"PREV_PHONE_NUM"        char(8),
"RSU"                   char(6),
"PRMRY_GRP"             smallint,
"FSL"                   char(6),
"CALL_INTERCEPT"        char(1),
"RESR_SUB_GRP"          char(1),
"OWNER_ACCT_ID"         numeric,
"OWNER_SRVC_LOC_ID"     numeric,
"PAYER_REF"             char(30),
"PAYER_KURT_ID"         int,
"CREATORS_REFERENCE"    char(25),
"UPDT_LAST_MOD_TS"      datetime
)
primary key
(
"RESR_ELMT_ID",
"RECORD_EFF_END_DT",
"RECORD_EFF_END_TM"
)
searchable columns (RECORD_EFF_END_DT)
go

define subscription DB2_ASN2_RESOURCE_NB_s2
for DB2_ASN2_RESOURCE_NB_d2
with replicate at origo_test.deFaktoReplica
where RECORD_EFF_END_DT = '31 dec 9999'
go

–          Activate the subscription with:

activate subscription DB2_ASN2_RESOURCE_NB_s2
for DB2_ASN2_RESOURCE_NB_d2
with replicate at origo_test.deFaktoReplica
go

–          Validate subscription with:

validate subscription DB2_ASN2_RESOURCE_NB_s2
for DB2_ASN2_RESOURCE_NB_d2
with replicate at origo_test.deFaktoReplica
go

–          Resume replication and ask DB2 admin to start the Repagent on their side.
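The resume itself is done in the repserver, e.g. with the same connection name as above:

resume connection to origo_test.deFaktoReplica
go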

Installing RepServer 15



–          Untar the repserver installation files into a separate directory and install the binaries etc. by running setup -console. Answer no to email alerts and don't enter license information.


–          If you are migrating from an earlier version of Repserver and you have already migrated the ASE server, then you need to make sure that the rep server logins are not expired (e.g. REP_dhtsolo_RSSD_prim and REP_dhtsolo_RSSD_maint). If they are expired, just try to log in to the ASE with them; you will need to change the passwords twice, first to a temp password and then back to the original.
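For example, after logging in with the expired login (the passwords here are placeholders):

sp_password '<original password>', '<temp password>'
go
sp_password '<temp password>', '<original password>'
go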


–          If you are using Norwegian language then you will also need to add the nocase_iso_1_nor.srt file into $SYBASE/charsets/iso_1 and tweak the $SYBASE/config/objectid.dat file by adding the following or similar line at the bottom of the collate section:

1.3.6.1.4.1.897.4.9.3.148 = nocase_iso_1_nor


Find the final number (148) by doing sp_helpsort in the ASE server and looking for the number associated with nocase_iso_1_nor.


–          Follow the regular steps for installing a repserver, i.e. rs_init, which should be fairly straightforward.


When configuring the new rep server, set the sort order to nocase_iso_1_nor.

Installing a Sybase ASE15 Server

The following points apply for a new installation or an upgrade to an existing server. With an upgrade, just create the new filesystem side by side with the existing installation and migrate over the logins, config settings, user databases etc. In an upgrade the ideal is to have enough space to run the two servers in parallel, but if not then at least create the bare-bones Sybase server (system databases), then drop a user database on the existing Sybase server, along with its devices, and recreate them for the new server. It is important that you create environment variables for the new server which do not include any reference to the old server, so you will need to hack the .cshrc and SYBASE.csh a little so that when you source the new .cshrc it loads the environment variables from scratch. Refer to the FPS Sybase server .cshrc file for examples. It is also important that you give the new server a different port number from the old server.

1)      You first need to make sure that the Unix machine is ready and has all the required patches installed etc.

2)      The next step is to get the filesystem (5GB) and various raw devices created by Unix. As an example, you will need at least the following raw devices in place before installing the Sybase server:

/dev/vg_kapaks/rlv_master        (Mbytes) 224
/dev/vg_kapaks/rlv_sybsystemdev  (Mbytes) 128
/dev/vg_kapaks/rlv_sysprocsdev   (Mbytes) 320
/dev/vg_kapaks/rlv_temp01        (Mbytes) 2016

If you are doing an upgrade and space is limited then you could make the tempdb device smaller and create a larger one later on.

3)      Next you need to install the actual server. Take the cd/dvd/tarball from Sybase and place it into $SYBASE/software, $SYBASE being the directory where you have decided to install Sybase, which must be different from any existing SYBASE installation. Unzip and extract the files.

4)      After you have untarred it, first set the SYBASE variable to where you want to install the server, e.g. setenv SYBASE /progs/kapaks/sybase_15, and then run the setup by first starting the X server on your PC and then typing setup. You can also run it without a GUI by typing setup -console. During the install you will be asked various questions, including information about the license; just continue without a license for now, we will put the license in later as we get a 30-day trial anyway. Choose Full installation, Enterprise edition, Server license, and when asked say continue installation without license. Also answer no to configure email alerts; this can be done later if needed. Once all the software has been unloaded, answer NO to remember ASE plugin password. You will then be asked if you want to build the servers: go through the various fields filling in the relevant information and click on build. I would suggest initially building just the dataserver, backupserver and XP server; other servers can be built later if required. Choose to custom configure all the servers. If this is a migration from an existing server, remember to use the same port numbers as before and also to name the servers the same. Use the following answers:

–          Answer Mixed to application type.
–          2k to page size. VERY IMPORTANT if this is a migration; otherwise maybe choose 4k for a completely new Sybase server installation.
–          Answer NO to the enable PCI question.
–          NO to optimise ASE configuration.
–          Other values are fairly obvious hopefully.
5)      Hopefully this all goes fine and you now have a working Sybase server. You now need to check whether the Sybase server needs to be localised to the Norwegian language; it almost certainly has to be. If you are migrating a server then you can easily check this in the old server's log file by looking for the line "SQL Server's default sort order is"; if it says nocase_iso_1_nor then you need to install it for the new server. If so, localise it to the Norwegian language with the following steps (if not, you can skip this step):

–          First source the SYBASE.csh file by doing source ~/SYBASE.csh
–          Copy the $SYBASE/charsets/iso_1/nocase_iso_1_nor.srt file over from another existing server and place it in the same folder on the new server, i.e. the $SYBASE/charsets/iso_1/ folder, and make sure it is readable by all.
–          Start $SYBASE/ASE-15_0/bin/sqlloc (having first started Exceed on your PC).
–          Select your Sybase server and enter the sa password, then change the sort order to "Dictionary order, case insensitive, modified for Norwegian" and, if necessary, the default character set to iso_1, and click OK a few times.
–          If the sqlloc application fails you can use sqllocres -r <resource file> (sqlloc.rs). The resource file can be found in $SYBASE/ASE-15_0/init/sample_resource_files. Just change the template resource file by putting in the Sybase server name and sa password, and setting sqlsrv.default_characterset: iso_1 and sqlsrv.sort_order: nocase_iso_1_nor

6)      The above step should be enough, but some servers have an extra language module installed, for no apparently good reason. If you are migrating from an old server then you can check for this in the master..syslanguages table. If there is a row called norwegian then you can install it on the new server by running:

sp_addlanguage norwegian, norsk,
'January,February,March,April,May,June,July,August,September,October,November,December',
'Jan,Feb,Mar,Apr,May,Jun,Jul,Aug,Sep,Oct,Nov,Dec',
'Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday',
mdy, 7
go

7)      The next step is to extend the temporary database onto its own device; 1GB is a good starting point. You could also consider creating a few extra temporary databases. This is not necessary if you are installing Sybase ASE 15.0.3, which asks for a special temporary device at install. You should at least consider creating a special temporary database to be used by logins with the sa_role.

8)      You should probably set an sa login password at this point for added security. You do this by logging into the server and typing sp_password NULL, <new password>. After this is done, place the password into the $SYBASE/$SYBASE_ASE/install/$DSQUERY file.

9)      You now need to adjust various server parameters like memory, number of open objects etc.; take these from an existing Sybase server and adjust as needed. The easiest way to do this is to look at the existing server's .cfg file and do something like cat DHT_KAPAKS.cfg | grep -v DEFAULT to find the non-default values. Ignore the monitoring and security related sections; these will be set when those components are installed later on.
Also ignore the bit about buffer pools; you create those after the Sybase server is rebooted with the new parameters, so do that now: just set the buffer pools to the same values as on the server being migrated from. Also set the parameter number of open partitions to the sum of number of open objects and number of open indexes.

10)   At this point you can migrate over logins if you are doing a migration from an existing server. This will vary a bit, but for example if you are upgrading from Sybase version 12 to 15 then you would do the following steps (for other ASE versions you might need to change the temp table a bit):

-- Create the temporary syslogins table on the new server
sp_configure 'allow updates to system tables', 1
go
USE tempdb
go
--drop table dbo.temp_syslogins
--go
CREATE TABLE dbo.temp_syslogins
(
suid        int           NOT NULL,
status      smallint      NOT NULL,
accdate     datetime      NOT NULL,
totcpu      int           NOT NULL,
totio       int           NOT NULL,
spacelimit  int           NOT NULL,
timelimit   int           NOT NULL,
resultlimit int           NOT NULL,
dbname      varchar(30)   NULL,
name        varchar(30)   NOT NULL,
password    varbinary(30) NULL,
language    varchar(30)   NULL,
pwdate      datetime      NULL,
audflags    int           NULL,
fullname    varchar(30)   NULL,
srvname     varchar(30)   NULL,
logincount  smallint      NULL,
procid      int           NULL
) LOCK ALLPAGES
go

Note: you may need to tweak the column types depending on which version of Sybase you are importing from; check the source syslogins table to double-check.

-- Now bcp in the logins which were bcp'd out on the old Sybase server
-- bcp tempdb..temp_syslogins in tore_syslogins.out -Usa -P -SMICOS2 -n

-- Alter the table to add the new columns
alter table tempdb..temp_syslogins
add lastlogindate datetime NULL
add crdate datetime NULL
add locksuid int NULL
add lockreason int NULL
add lockdate datetime NULL
go

-- Delete the sa and probe logins from the temp_syslogins table
use tempdb
go
delete from tempdb..temp_syslogins where name in ("sa", "probe")
go

-- Delete existing logins which match by name
delete from tempdb..temp_syslogins where name in
( select t.name from tempdb..temp_syslogins t, master..syslogins s
  where t.name = s.name )
go

-- Increase the suids. ONLY necessary if you are merging multiple Sybase servers into one.
update tempdb..temp_syslogins set suid = suid + 6150
go

-- Now copy the logins over to the master..syslogins table
insert into master..syslogins select * from tempdb..temp_syslogins
go

-- Create the sysloginroles table
USE tempdb
go
CREATE TABLE dbo.temp_sysloginroles
(
suid   smallint NOT NULL,
srid   smallint NOT NULL,
status smallint NOT NULL
) LOCK DATAROWS WITH EXP_ROW_SIZE=1 ON system
go

Note: you may need to tweak the column types depending on which version of Sybase you are importing from; check the source sysloginroles table to double-check.

-- BCP in the data
-- bcp tempdb..temp_sysloginroles in c1p16_sysloginroles.out -Usa -P -SMICOS2 -n

-- Remove the roles for the sa and probe logins
delete from tempdb..temp_sysloginroles where suid <= 2
go

-- Alter the table to make it compatible with ASE15
alter table tempdb..temp_sysloginroles
modify suid int not NULL
modify srid int not NULL
go

-- Increase the suids. ONLY necessary if you are merging multiple Sybase servers into one.
update tempdb..temp_sysloginroles set suid = suid + 6150
go

-- Copy the roles into master..sysloginroles
insert into master..sysloginroles select * from tempdb..temp_sysloginroles
go

-- The next steps relate to synchronising the suids after you have loaded the old
-- database into the new server.
-- First, on the original server, check whether there are any aliases set up, or whether
-- there are any users who have a different name to their syslogin name, with the
-- following on the original Sybase database:
select * from sysalternates
go
select l.suid, u.suid, l.name, u.name
from master..syslogins l, sysusers u
where l.suid = u.suid and l.name != u.name
go

-- The following will resync the suids in a user database with the suids in syslogins.
update sysusers
set suid = l.suid
from sysusers, master..syslogins l
where l.name = sysusers.name

11)   You now need to set up the environment variables in .profile and .cshrc, and also create the servername file containing the sa password, if you haven't already done so, in the $SYBASE/ASE-15_0/install/ directory. E.g. for a Sybase server called MICOS1 this file would be called MICOS1 and would contain only the sa password; this is needed for the scripts to work. You can copy the environment variables from an existing Sybase server installation and just modify the values to your needs.

12)   You should now take a look at the $SYBASE installation folder and make sure that the interfaces, .cshrc, SYBASE.sh and SYBASE.csh files are all world readable.

13)   If this is a brand new machine then the next step is to generate the required licenses (always choose Un-served License). If this is just one server on an unclustered machine, for example a test server, then that is fairly easy and the license file can be generated from https://sybase.subscribenet.com/control/sybs/login?nextURL=%2Fcontrol%2Fsybs%2Findex. If it is a clustered server then you need to create a multimode license file for each machine in the cluster and place it underneath the SYSAM folder. You create a multimode license file by answering "2" to the question "Number of machines to license"; you then enter both the machines it can run on, and generate and download the license file. The last step, for clustered and non-clustered Sybase servers, is to place the license file under $SYBASE/$SYBASE_SYSAM2/licenses and, if you want, update the value for LM_LICENSE_FILE in the .profile and/or .cshrc file to point to it, although this should not be necessary.

14)   Next we need to install the various standards that you might have. First you need to create the scripts directory (sc) etc., so just copy this from an existing server installation and place it in an appropriate place; the best thing is to use a tarball for this and just get rid of the unnecessary files afterwards, like logs etc.

15)   Next we create the additional Sybase devices from the raw devices on Unix.

16)   The next step is to install several specific stored procedures etc.:

–          First create a new database called syb_drift with 20MB data and 5MB log:

CREATE DATABASE syb_drift ON data01='20M' LOG ON logg01='5M'
go
USE master
go
EXEC sp_dboption 'syb_drift', 'trunc log on chkpt', true
go

–          Type: cdhs, cd sc, inst (choose option 1 to install sybdrift, enter the sa password, but answer no to "Vil du installere crontab" ("Do you want to install crontab") and "Vil du teste konsistens-sjekk, backup og overvaaking?" ("Do you want to test consistency check, backup and monitoring?")).
–          Next you need to install the relevant sp_thresholdaction stored procedure, and this depends on whether or not you are using SQL Backtrack to back up the databases. If you are using SQL Backtrack then choose the following stored proc:

USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction
    IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction >>>'
END
go
USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction_old
    IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction_old >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction_old >>>'
END
go
create procedure sp_thresholdaction_old
@dbname         varchar(30),
@segmentname    varchar(30),
@space_left     int,
@status         int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int
if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction_old', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction_old >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction_old >>>'
go
create procedure sp_thresholdaction
@dbname         varchar(30),
@segmentname    varchar(30),
@space_left     int,
@status         int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int,
        @cmd1 varchar(5000), @cmd2 varchar(5000)
set @cmd1 = '$DT_SBACKTRACK_HOME/bin/dtsbackup ${DTPHYSICAL}/' + @dbname + ' -log_only | tee $BACKUPKAT/ch/backtrack_logg'
set @cmd2 = '$BACKUPKAT/sc/m_sback_log ' + @dbname
--if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
    exec xp_cmdshell @cmd1
    exec xp_cmdshell @cmd2
    exec xp_cmdshell @cmd1
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction >>>'
go

If, however, you are not using SQL Backtrack, install the following version of sp_thresholdaction:

USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction
    IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction >>>'
END
go
USE sybsystemprocs
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
BEGIN
    DROP PROCEDURE dbo.sp_thresholdaction_old
    IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
        PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_thresholdaction_old >>>'
    ELSE
        PRINT '<<< DROPPED PROCEDURE dbo.sp_thresholdaction_old >>>'
END
go
create procedure sp_thresholdaction_old
@dbname         varchar(30),
@segmentname    varchar(30),
@space_left     int,
@status         int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int
if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction_old', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction_old') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction_old >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction_old >>>'
go
create procedure sp_thresholdaction
@dbname         varchar(30),
@segmentname    varchar(30),
@space_left     int,
@status         int
as
declare @devname varchar(100), @before_size int, @after_size int,
        @before_time datetime, @after_time datetime, @error int,
        @cmd1 varchar(255)
select @cmd1 = '$BACKUPKAT/sc/dump_t ' + @dbname
--if @segmentname != (select name from syssegments where segment = 2)
begin
    print "THRESHOLD WARNING: database '%1!', segment '%2!' at '%3!' pages",
          @dbname, @segmentname, @space_left
    exec xp_cmdshell @cmd1
end
go
EXEC sp_procxmode 'dbo.sp_thresholdaction', 'unchained'
go
IF OBJECT_ID('dbo.sp_thresholdaction') IS NOT NULL
    PRINT '<<< CREATED PROCEDURE dbo.sp_thresholdaction >>>'
ELSE
    PRINT '<<< FAILED CREATING PROCEDURE dbo.sp_thresholdaction >>>'
go

17)   The next step is to configure the new mda montables, and that is done as follows:

sp_configure "enable monitoring", 1
go
sp_configure "SQL batch capture", 1
go
sp_configure "max SQL text monitored", 100000
go
sp_configure "sql text pipe active", 1
go
sp_configure "sql text pipe max messages", 10000
go
sp_configure "object lockwait timing", 1
go
sp_configure "per object statistics active", 1
go
sp_configure "statement cache size", 10000
go
sp_configure "enable stmt cache monitoring", 1
go
sp_configure "deadlock pipe max messages", 10000
go
sp_configure "deadlock pipe active", 1
go
sp_configure "errorlog pipe active", 1
go
sp_configure "errorlog pipe max messages", 10000
go
sp_configure "wait event timing", 1
go
sp_configure "statement statistics active", 1
go
sp_configure "process wait events", 1
go
sp_configure "plan text pipe active", 1
go
sp_configure "plan text pipe max messages", 10000
go
sp_configure "statement pipe max messages", 10000
go
sp_configure "statement pipe active", 1
go

18)   Install any extra stored procedures; these can be found on an existing Sybase server, e.g. sp__mda_hot_tables, sp_mda_io etc.

19)   At this stage you can create (for load) and start loading the user databases, using the SQL Backtrack remote load procedure. For example, from an existing Sybase server run:

dtsrecover /home/solo1/datatools/sbackups.physical/dhp_SOLP/commissiondb -server TORIGODB -database commissiondb -user sa -password <pass> -copyover

Remember to perform the suid resync section above. If you haven't done so previously, you need to first create all the required Sybase data devices from the raw partitions, which should already be on the Unix machine.

20)   Install dbccdb:

–          Run sp_plan_dbccdb to find the recommended size of the dbccdb and create it, for example:

USE master
go
CREATE DATABASE dbccdb ON data11='1500M' LOG ON log01='500M'
go
USE master
go
EXEC sp_dboption 'dbccdb', 'trunc log on chkpt', true
go
USE dbccdb
go
CHECKPOINT
go

–          Run the scripts/installdbccdb script in Unix using isql.

–          Run the following sql, with the value being the max number of processes displayed by sp_plan_dbccdb:

sp_configure "number of worker processes", 6
go

–          If you haven't already done so earlier, create a 150MB 16K memory pool in the cache used by dbccdb, usually just the default data cache:
EXEC sp_poolconfig 'default data cache', '150M', '16K'
go
EXEC sp_poolconfig 'default data cache', '16K', 'wash=30M'
go

–          Create the workspaces from the maximum recommended values as follows:

use dbccdb
go
sp_dbcc_createws dbccdb, 'default', scan_dbccdb, scan, '750M'
go
sp_dbcc_createws dbccdb, 'default', text_dbccdb, text, '200M'
go

–          Configure dbccdb for each of the user databases as follows; just use the max values for all the dbs:

sp_dbcc_updateconfig tlfbank, 'max worker processes', '12'
go
sp_dbcc_updateconfig tlfbank, 'dbcc named cache', 'default data cache', '200M'
go
sp_dbcc_updateconfig tlfbank, 'scan workspace', scan_dbccdb
go
sp_dbcc_updateconfig tlfbank, 'text workspace', text_dbccdb
go

21)   The next thing to install is auditing and unique sa logins; the details for this can be found in the document Auditing and Unique Logins.

22)   It is a good idea to create a second tempdb to reduce the load on the tempdb, and that is done as follows:

create temporary database tempdb2 on tempdb_dev2='1024M' log on tempdb_log='524M'
go
sp_tempdb 'add', 'tempdb2', 'default'
go
sp_tempdb 'show'
go

23)   You will also need to install SQL Backtrack if that is required.

24)   When you come to load the actual databases at migration or upgrade time, make sure that you run update index stats against all the user tables AND all the system tables except syslogs and sysgams, otherwise SQL Backtrack will run very slowly.

25)   The final step is to set up all the cron jobs to run; here just look at what is currently configured on existing or similar servers. In some cases (check with the Unix sa) you need to create a copy of the crontab in the $HOME directory for clustering etc. You do this with a command similar to crontab -l > $HOME/crontab.sybgunda, where sybgunda is the username.

26)   Another thing you may find is that SQL Backtrack stops working if you have changed the IP address of the machine; this might be because all the TSM (OBSI settings) have changed. To fix this you need to find out what the new settings are in the dsm.sys file and then probably create a new adsm dump pool.
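For step 24, one way to generate all the update index statistics commands is a sketch like the following, run in each database; the output lines are then executed as a new batch:

select 'update index statistics ' + name
from sysobjects
where type in ('U', 'S')
and name not in ('syslogs', 'sysgams')
go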

How to drop a database when drop database fails

Follow the steps in this section to drop a database when drop database fails. Do not use these steps unless directed to do so by this book, or unless there is no critical data in the database.

1. Log in as "sa".

2. Check to make sure the database has been marked "suspect". The following query produces a list of all databases which are marked suspect:

1> select name from master..sysdatabases

2> where status & 256 = 256

3> go

3. If the database is marked "suspect", go to step 4. If it is not marked "suspect", mark it in one of the following ways:

a. Execute the sp_marksuspect stored procedure discussed under "How to Mark a Database "suspect"", and restart Adaptive Server to initialize the change.

b. Use the procedure below:

1> sp_configure "allow updates", 1

2> go

1> use master

2> go

1> begin transaction

2> update sysdatabases set status = 256

3> where name = "database_name"

4> go

Verify that only one row was affected and commit the transaction:

1> commit transaction

2> go

Reset the allow updates option of sp_configure:

1> sp_configure "allow updates", 0

2> go

Restart Adaptive Server to initialize the change.

4. Remove the database:

1> dbcc dbrepair(database_name, dropdb)

2> go

dbcc dbrepair sometimes displays an error message even though it successfully drops the database. If an error message occurs, verify that the database is gone by executing the use database_name command. This command should fail with a 911 error, since you dropped the database. If you find any other error, contact Sybase Technical Support.

How to move the master database to a new device

You will get an error if you try to extend the master database onto a device other than the master device.

It is recommended that you keep user objects out of the master database. If you keep user databases off the master device, you allow space in case the master database needs to grow. In addition, if you ever need to rebuild the master device, it will be easier if it does not contain user databases.

Adaptive Server users can move any “home-grown” system procedures that start with “sp_” to sybsystemprocs (by dropping them from the master database and creating them in sybsystemprocs).

Extend the master database only if absolutely necessary! If you are sure you must increase the master database size and have no room on the current master device, use the following procedure to remove user databases from the master device.

Move User Databases

  • Dump the user databases with the dump database command.
  • Rename the dumped databases on the master device with sp_renamedb.
  • Re-create the databases with their original names on another device with create database. Be sure they are created exactly like the old databases, to avoid 2558 and other errors. Refer to Error 2558 for more information.
  • Load the dumps with load database.
  • Use the online database command for each database to make the databases available for use.
  • Check the databases in their new location to make sure the load was successful (that is, perform a simple query with isql), and if everything loaded successfully, drop the old databases from the master device.

You can now try to increase the size of the master database on the master device with the alter database command.

Increase Master Device Size

If the master device contains only the master database and the master device is too small, then use the following procedure:

Warning!

Altering the master device is extremely risky! Avoid it if at all possible. Be familiar with the recovery methods in “System Database Recovery” in case you lose your master database or master device.

  • Back up the master database with the dump database command.
  • Save the contents of key system tables such as sysdatabases, sysdevices, sysusages, and syslogins. Make a note of these values. Also make a note of the path to the dump device in sysdevices.
  • Use the buildmaster utility to build a new master device with enough extra space so that you will never need to increase the master device again. When buildmaster completes, a new master database will exist on the new master device. The buildmaster executable is found in bin, so use ./buildmaster and follow the prompts.
  • You now need to create a new runserver file which points to this new master device (the -d option), and start up the server with this new runserver file.
  • Expand the size of the new master database with the alter database command, if necessary, so that it matches the size of the dumped master database (get this info from the original sysusages table, where the size is in 2K blocks; the alter database command uses sizes in MB).
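For example, a sketch of the size calculation, run against the original server's sysusages (or against the saved copy of it; size is in 2K pages, so divide by 512 to get MB):

1> select sum(size) / 512 from master..sysusages where dbid = db_id("master")
2> go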
  • Execute the following command in isql:

1> select name, high from master..sysdevices

2> where name = "master"

3> go

and note the "high" value for the master device. Shut down the server.

  • Add the -m option to the runserver file to start Adaptive Server in single-user mode.
  • Allow updates to the system catalog:

1> sp_configure "allow updates", 1

2> go

  • Change the value for srvnetname in sysservers from SYB_BACKUP to the name of your backup server.
  • Load the dump of the master database, using load database master from <full path name>.
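For example (the dump path here is just a placeholder):

1> load database master from "/sybase/dumps/master.dmp"
2> go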
  • Reset the “high” value in master..sysdevices:

1> begin transaction

2> go

1> update master..sysdevices

2> set high = <value of high from step 5>

3> where name = "master"

4> go

  • If the previous update affected only one row, commit the transaction.
  • Restart Adaptive Server.
  • Turn off allow updates:

1> sp_configure "allow updates", 0

2> go

  • Edit the new runserver file to take the server out of single-user mode, i.e. remove the -m option, and restart the server. If this all works fine (leave it running for a while), you can then remove the original master device and its related runserver file.

How to perform a load from a remote backup server


A step by step guide:


In this guide there are assumed to be two servers TROPHY_1103 and TRIDENT_1103.


1.       Create a backup server for TROPHY_1103 called TROPHY_1103_BACK

2.       Create a backup server for TRIDENT_1103 called TRIDENT_1103_BACK

(For info on creating backup servers refer to the reference manuals.)

3.       Log into TROPHY_1103 and execute the following two commands:

sp_addserver SYB_BACKUP, TROPHY_1103_BACK

sp_addserver TRIDENT_1103_BACK

(If the SYB_BACKUP part doesn't work, do a sp_dropserver SYB_BACKUP first.)

4.       Log into TRIDENT_1103 and execute the following two commands:

sp_addserver SYB_BACKUP, TRIDENT_1103_BACK

sp_addserver TROPHY_1103_BACK

5.       Take a look at the interfaces file for TROPHY_1103 and make a note of the entry for TROPHY_1103_BACK. This entry needs to be added to the interfaces file for TRIDENT_1103, except the line which starts with master. Likewise, the entry for TRIDENT_1103_BACK in the TRIDENT_1103 interfaces file needs to be entered into the TROPHY_1103 interfaces file, again taking out the master line and leaving the query line.

6.       Next make sure that both servers TROPHY_1103 and TRIDENT_1103 are set up for remote procedure calls by checking that sp_configure "allow remote access" has a run value of 1; if not, issue the following command: sp_configure "allow remote access", 1

7.       Test that the two servers can communicate by performing the following procedure calls. From TROPHY_1103:

TRIDENT_1103_BACK...sp_who

And from TRIDENT_1103:

TROPHY_1103_BACK...sp_who

8.       The two servers are now set up to allow remote backups. The load can be issued from either server (in this example I'm performing it from TROPHY_1103) with the following command:

load database <database name> from "<full pathname of the tape or disk device used by TRIDENT_1103 ASE>" at TRIDENT_1103_BACK
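For example (the database name and device path are placeholders):

load database testdb from "/sybase/dumps/testdb.dmp" at TRIDENT_1103_BACK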

How to drop a corrupt table


1. sp_configure "allow updates", 1

go


or


reconfigure with override ( if System X)

go


2. Use the database; get its dbid [select db_id()] and write it down for reference.


3. select id from sysobjects where name = <bad-table-name>

go


… write that down, too.


4. select indid from sysindexes where id = <table-id>

go


… you will need these index IDs to run dbcc extentzap. Also, remember that if the table has a clustered index you will need to run extentzap on index "0", even though there is no sysindexes entry for that indid.


5. begin transaction

go


… not required, but a *real*good*idea*.


6. Type in this short script:


declare @obj int

select @obj = id from sysobjects where name = <bad-table-name>

delete syscolumns where id = @obj

delete sysindexes where id = @obj

delete sysobjects where id = @obj

delete sysprocedures where id in

(select id from sysdepends where depid = @obj)

delete sysdepends where depid = @obj

delete syskeys where id = @obj

delete syskeys where depid = @obj

delete sysprotects where id = @obj

delete sysconstraints where tableid = @obj

delete sysreferences where tableid = @obj


…This gets rid of all system catalog information for the object, including any object and procedure dependencies that may be present. Some of these lines may be unnecessary; you should type them in anyway just for the exercise.


7. commit transaction

go


(unless you made a mistake in step 6, in which case rollback.)


8. Prepare to run dbcc extentzap:


use master

go

sp_dboption <db-name>, "read", true

go

use <db-name>

go

checkpoint

go


(Each of the above must be given as a separate batch; that is, type "go" after every line.)


sp_role "grant", sybase_ts_role, "sa"

go

set role "sybase_ts_role" on

go


9. Run dbcc extentzap once for EACH index (including index 0, the data level) that you got from step 4 above:


**********
The following commands are very dangerous. Use them with care because, if you give the wrong object id, all data for that object will be lost forever. You want to make sure that the object id is the id of the bad table and not one of your good objects.
**********


dbcc traceon(3604) /* lets you see errors */
go


dbcc extentzap( <db-id>, <object-id>, <index-id>, 0)

go

dbcc extentzap( <db-id>, <object-id>, <index-id>, 1)

go


Notice that extentzap runs TWICE for each index. This is because the last parameter (the "sort" bit) might be 0 or 1 for each index, and you want to be absolutely sure you clean them all out.


10. Clean up after yourself:


use master

go

sp_dboption <db-name>, "read", false

go

sp_configure "allow updates", 0

go

reconfigure ( if System X)

go

use <db-name>

go

checkpoint

go

Database status values in sysdatabases

Status control bits in the sysdatabases table


Decimal   Hex      Status
4         0x04     select into/bulkcopy; can be set by user
8         0x08     trunc log on chkpt; can be set by user
16        0x10     no chkpt on recovery; can be set by user
32        0x20     Database created with for load option, or crashed while loading database; instructs recovery not to proceed
256       0x100    Database suspect; not recovered; cannot be opened or used; can be dropped only with dbcc dbrepair
512       0x200    ddl in tran; can be set by user
1024      0x400    read only; can be set by user
2048      0x800    dbo use only; can be set by user
4096      0x1000   single user; can be set by user
8192      0x2000   allow nulls by default; can be set by user



There is also an undocumented value, 320, which is very similar to 256 (database suspect) but allows you to perform certain functions on the db.
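For example, to check one of these bits you mask the status column, in the same way as the suspect query earlier in this document. This lists all databases with trunc log on chkpt set:

1> select name from master..sysdatabases where status & 8 = 8
2> go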

Using set showplan

This section explains how to use and interpret the showplan command to better understand and utilize the SQL Server query optimizer.

When you send a SQL statement to the Sybase SQL Server, the request first goes to a cost-based query optimizer whose job it is to find the most efficient data access path to fulfill the request. To do this, the optimizer examines such information as:

  • The structure of any indices defined on the table(s) involved

  • The distribution of data within the indices

  • The number of rows in the table(s) involved

  • The number of data pages used by the table(s) involved

  • The amount of data cache SQL Server is currently using

  • The access path that would require the least amount of I/O and, therefore, would be the fastest

Once an optimal access path is calculated, it is stored in a query plan within the procedure cache.

The showplan command allows you to view the plan the optimizer has chosen and to follow each step that the optimizer took when joining tables, reading from an index, or using one of several other methods to determine cost efficiency. To invoke the showplan command, enter:

1> set showplan on

2> go

This command causes SQL Server to display query plan information for every SQL statement executed within the scope of the SQL Server session.

Since the determination of a query plan is performed independently from the actual data retrieval or modification, it is possible to examine and tune query plans without actually executing the SQL statement. This can be accomplished by instructing SQL Server not to execute any SQL statements via the following command:

1> set noexec on

2> go



Note
Issue noexec after showplan or the set showplan command will not execute.
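
For example, a typical tuning session might look like this (using the authors table from pubs):

1> set showplan on
2> go
1> set noexec on
2> go
1> select au_lname from authors where city = "Oakland"
2> go
1> set noexec off
2> go

The plan for the select is displayed, but no rows are returned until noexec is turned off again.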



For more information about executing the showplan command, refer to the SQL Server Performance and Tuning Guide.



Note
The showplan command does not function within stored procedures or triggers. However, if you set it to on and then execute a stored procedure or a command that fires a trigger, you can see the procedure or trigger output.



Use the following examples to analyze a query plan. In all cases, examples use the pubs database provided with each SQL Server release.

Interpreting showplan Output

The output of the showplan command consists of many different types of statements, depending on the details of the access path that is being used. The following sections describe some of the more common statements.

STEP n

This statement is added to the showplan output for every query, where n is an integer, beginning with 1. For some queries, SQL Server cannot effectively retrieve the results in a single step, and must break the query plan into several steps. For example, if a query includes a group by clause, the query needs to be broken into at least two steps: one to select the qualifying rows from the table and another to group them.

The following query demonstrates a single-step query and its showplan output:

1> select au_lname, au_fname from authors

2> where city = "Oakland"

3> go

STEP 1

The type of query is SELECT

FROM TABLE

authors

Nested iteration

Table Scan

A multiple-step example is shown in the next section.

The Type of Query Is SELECT (into a Worktable)

This showplan statement indicates that SQL Server needs to insert some of the query results into an intermediate worktable and, later in the query processing, select the values from that table. This is most often seen with a query which involves a group by clause, as the results are first put into a worktable, and then the qualifying rows in the worktable are grouped based on the given column in the group by clause.

The following query returns a list of cities and indicates the number of authors who live in each city. The query plan is composed of two steps: the first step selects the rows into a worktable, and the second step retrieves the grouped rows from the worktable.

1> select city, total_authors = count (*)

2> from authors group by city

3> go

STEP 1
The type of query is SELECT (into a worktable)
GROUP BY
Vector Aggregate
FROM TABLE
authors
Nested iteration
Table Scan
TO TABLE
Worktable

STEP 2
The type of query is SELECT
FROM TABLE
Worktable
Nested iteration
Table Scan

The Type of Query Is query_type

This statement describes the type of query for each step. For most user queries, the value for query_type is select, insert, update, or delete. If showplan is turned on while other commands are issued, the query_type reflects the command that was issued. The following two examples show output for different queries or commands:

1> create table Mytab (col1 int)

2> go

STEP 1

The type of query is CREATE TABLE

1> insert publishers

2> values ("9904", "NewPubs", "Nome", "AL")

3> go

STEP 1

The type of query is INSERT

The update mode is direct

Table Scan

TO TABLE

publishers

The Update Mode Is Deferred

There are two methods, or modes, that SQL Server can use to perform update operations such as insert, delete, update, and select into. These methods are called deferred update and direct update. When the deferred method is used, the changes are applied to all rows of the table by making log records in the transaction log to reflect the old and new value of the column(s) being modified (in the case of update operations), or the values that will be inserted or deleted (in the case of insert and delete).

When all log records have been constructed, the changes are applied to the data pages. This method generates more log records than a direct update, but it has the advantage of allowing commands to execute which may cascade changes throughout a table. For example, consider a table that has a column col1 with a unique index on it and data values numbered consecutively from 1 to 100 in that column. Execute an update statement to increase the value in each row by one:

1> update Mytable set col1 = col1 + 1

2> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

Mytable

Nested iteration

Table scan

TO TABLE

Mytable

Consider the consequences of starting at the first row in the table and updating each row until the end of the table: this would violate the unique index. First, updating the first row (which has an initial value of 1) to 2 would cause an error, since 2 already exists in the table. Second, updating the second row, or any row in the table except the last one, causes the same problem.

Deferred updates avoid unique index violations: log records are created to show the new values for each row, then the existing rows are deleted and the new values inserted. In the following example, the table authors has no clustered index or unique index:

1> insert authors select * from authors

2> go

STEP 1

The type of query is INSERT

The update mode is deferred

FROM TABLE

authors

Nested iteration

Table Scan

TO TABLE

authors

Because the table does not have a clustered index, new rows are added at the end of the table. The query processor distinguishes the rows already in the table (before the insert command) from the rows to be inserted, thus avoiding the continuous loop of selecting a row, inserting it at the end of the table, re-selecting the row just inserted and reinserting it. The deferred insertion method first creates the log records to show all currently existing values in the table. Then SQL Server rereads those log records to insert the rows into the table.

The Update Mode Is Direct

Whenever possible, SQL Server tries to directly apply updates to tables, since this is faster and creates fewer log records than the deferred method. Depending on the type of command, one or more criteria must be met in order for SQL Server to perform the update using the direct method. The criteria are as follows:

  • insert: For the direct method to be used, the table into which the rows are being inserted cannot be a table which is being read from in the same command. The second query example in the previous section demonstrates this: the rows are being inserted into the same table from which they are being selected. In addition, if rows are being inserted into the target table, and one or more of the target table's columns appear in the where clause of the query, then the deferred method, rather than the direct method, will be used.

  • select into: When a table is being populated with data by means of a select into command, the direct method will always be used to insert the new rows.

  • delete: For the direct update method to be used for delete, the query optimizer must be able to determine that either zero or one row qualifies for the delete. The only way to verify this is to check that one unique index exists on the table, which is qualified in the where clause of the delete command, and that the target table is not joined with any other table(s).

  • update: For the direct update method to be used for update commands, the same criteria apply as for delete: a unique index must exist so that the query optimizer can determine that no more than one row qualifies for the update, and the only table in the update command must be the target table to update. Also, all updated columns must be fixed-length datatypes, not variable-length datatypes. Note that any column that allows null values is internally stored by SQL Server as a variable-length datatype column.

1> delete from authors

2> where au_id = "172-32-1176"

3> go

STEP 1

The type of query is DELETE

The update mode is direct

FROM TABLE

authors

Nested iteration

Using Clustered Index

TO TABLE

authors

1> update titles set type = 'popular_comp'

2> where title_id = "BU2075"

3> go

STEP 1

The type of query is UPDATE

The update mode is direct

FROM TABLE

titles

Nested iteration

Using Clustered Index

TO TABLE

titles

1> update titles set price = $5.99

2> where title_id = "BU2075"

3> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

titles

Nested iteration

Using Clustered Index

TO TABLE

titles

Note that the only difference between the second and third example queries is the column of the table which is updated. In the second query the direct update method is used, whereas in the third query the deferred method is used. This difference occurs because of the datatype of the column being updated: the titles.type column is defined as char(12) NOT NULL, whereas the titles.price column is defined as money NULL. Since the titles.price column is not a fixed-length datatype, the direct method cannot be used.

GROUP BY

This statement appears in the showplan output for any query that contains a group by clause. Queries that contain a group by clause are always two-step queries: the first step selects the qualifying rows into a table and groups them; the second step returns the rows from the table as seen in the following example:

1> select type, avg (advance),

sum(ytd_sales)

2> from titles group by type

3> go

STEP 1

The type of query is SELECT (into a worktable)

GROUP BY

Vector Aggregate

FROM TABLE

titles

Nested iteration

Table Scan

TO TABLE

Worktable

STEP 2

The type of query is SELECT

FROM TABLE

Worktable

Nested iteration

Table Scan

Scalar Aggregate

Transact-SQL includes the aggregate functions avg, count, max, min, and sum. Whenever you use an aggregate function in a select statement that does not include a group by clause, the result is a single value, regardless of whether it operates on all table rows or on a subset of the rows defined in the where clause. When an aggregate function produces a single value, the function is called a scalar aggregate, and showplan lists it that way, as seen in the following example:

1> select avg(advance), sum(ytd_sales) from titles

2> where type = "business"

3> go

STEP 1

The type of query is SELECT

Scalar aggregate

FROM TABLE

titles

Nested iteration

Table scan

STEP 2

The type of query is SELECT

Table Scan

showplan considers this a two-step query, which is similar to the group by output. Since the query contains a scalar aggregate which will return a single value, SQL Server keeps a “variable” internally to store the result of the aggregate function. It can be thought of as a temporary storage space to keep a running total of the aggregate function as the qualifying rows from the table are evaluated. After all rows are evaluated from the table in step 1, the final value of the variable is selected in step 2 to return the scalar aggregate result.

Vector Aggregates

When a group by clause is used in a query that also includes an aggregate function, the aggregate function produces a value for each group. These values are called vector aggregates. The vector aggregate statement from showplan indicates that the query includes a vector aggregate. The following example query includes a vector aggregate:

1> select title_id, avg (qty) from sales

2> group by title_id

3> go

STEP 1

The type of query is SELECT (into a worktable)

GROUP BY

Vector Aggregate

FROM TABLE

sales

Nested iteration

Table Scan

TO TABLE

Worktable

STEP 2

The type of query is SELECT

FROM TABLE

Worktable

Nested iteration

Table Scan

from table Statement

This showplan output shows the table from which the query reads. In most queries, the from table is followed by the table's name. In other cases, it may show that it is selecting from a worktable. The significant fact is that the from table output shows the query optimizer's order for joining tables: the order in which the tables are listed is the order in which they are joined. This order often differs from the order in which tables are listed in the query's from or where clauses, because the query optimizer checks many join orders for the tables and picks the order that uses the fewest I/Os.

1> select authors.au_id, au_fname, au_lname

2> from authors, titleauthor, titles

3> where authors.au_id = titleauthor.au_id

4> and titleauthor.title_id = titles.title_id

5> and titles.type = "psychology"

6> go

STEP 1

The type of query is SELECT

FROM TABLE

titles

Nested iteration

Table Scan

FROM TABLE

titleauthor

Nested iteration

Table Scan

FROM TABLE

authors

Nested iteration

Table Scan

This query illustrates the join order that the query optimizer chose for the tables, which is not the order listed in either the from or where clauses. By examining the order of the from table statements, it can be seen that the qualifying rows from the titles table are first located with the search clause titles.type = "psychology". Those rows are then joined with the titleauthor table using the join clause titleauthor.title_id = titles.title_id. Finally, the titleauthor table is joined with the authors table to retrieve the desired columns, using the join clause authors.au_id = titleauthor.au_id.

to table Statement

When you issue a command that tries to modify one or more table rows, such as insert, delete, update, or select into, the to table statement shows the target table that is being modified. If the operation requires an intermediate step and inserts the rows into a worktable, the to table statement names the worktable instead of the user table.

1> insert sales

2> values ("8042", "QA973", "7/15/94", 7,

3> "Net 30", "PC1035")

4> go

STEP 1

The type of query is INSERT

The update mode is direct

TO TABLE

sales

1> update publishers

2> set city = "Los Angeles"

3> where pub_id = "1389"

4> go

STEP 1

The type of query is UPDATE

The update mode is deferred

FROM TABLE

publishers

Nested iteration

Using Clustered Index

TO TABLE

publishers

Note that the showplan for the second query indicates that the publishers table is used for both from table and to table. With update operations, the query optimizer must first read the table containing the row(s) to be updated, resulting in the from table statement, and then must modify the row(s), resulting in the to table statement.

Worktable

For some queries, such as those that require ordered or grouped output, the query optimizer creates its own temporary table called a worktable. The worktable holds all the intermediate results of the query where they are ordered and/or grouped, and then the final select is done. When all results are returned, the table is dropped automatically. The tempdb database holds all temporary tables so the System Administrator may need to increase the size of that database to accommodate very large worktables. For more information about worktables, refer to Chapter 8, “The tempdb Database.”

Since the query optimizer creates these worktables for its own internal use, the worktable names are not listed in the tempdb..sysobjects table.

Nested Iteration

The nested iteration is the default technique used to join tables and return rows from a table. It indicates that the query optimizer uses one or more sets of loops to read a table and fetch a row, qualify the row based on the search criteria given in the where clause, return the row to the front end, and then loop again for the next row. The following example shows the query optimizer doing nested iterations through each of the tables in the join:

1> select title_id, title

2> from titles, publishers

3> where titles.pub_id = publishers.pub_id

4> and publishers.pub_id = '1389'

5> go

STEP 1

The type of query is SELECT

FROM TABLE

publishers

Nested iteration

Using clustered index

FROM TABLE

titles

Nested iteration

Table Scan

Table Scan

This showplan statement identifies the method used to fetch the physical result rows from the given table. When the table scan method is used, execution begins with the first row in the table; each row is then fetched, compared with the conditions set in the where clause, and returned as valid data if the conditions are met. No matter how many rows qualify, every row in the table must be checked, and this causes problems if the table is large (the scan has a high I/O overhead). Even if a table has one or more indexes on it, the query optimizer may still choose a table scan instead of reading the index. The following query shows a typical table scan:

1> select au_lname, au_fname

2> from authors

3> go

STEP 1

The type of query is SELECT

FROM TABLE

authors

Nested iteration

Table Scan