Sybase has a relatively new backup type called a cumulative dump, which backs up all the changes in the database since the last full dump. It can be used in between full dumps and transaction log dumps to give greater flexibility, especially on large databases, and is similar to the differential backup in SQL Server.
To enable this backup type for a database you must first enable the 'allow incremental dumps' option, e.g. sp_dboption <dbname>, 'allow incremental dumps', true
You can then issue the cumulative dump command, as long as there is an existing valid full dump: dump database <dbname> cumulative to '/tmp/tore1-cum.dmp' go
To load the cumulative dump the syntax is: load database <dbname> cumulative from '/tmp/tore1-cum.dmp' go
Backup / Restore order
In terms of using the cumulative backup you could for example take a weekly Full backup, then daily cumulative backups and hourly transaction log backups.
So in a recovery situation you would first recover the Full Backup, then the latest Cumulative Backup and finally the transaction log dumps up to the required point in time.
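That recovery sequence can be sketched as below; the dump file paths and the database name (pubs2) are illustrative placeholders, not values from an actual setup:

```sql
-- 1. Load the most recent full dump
load database pubs2 from '/backups/pubs2_full.dmp'
go
-- 2. Load the latest cumulative dump taken after that full dump
load database pubs2 cumulative from '/backups/pubs2_cum.dmp'
go
-- 3. Load the transaction log dumps in sequence up to the required point in time
load transaction pubs2 from '/backups/pubs2_log1.trn'
go
-- 4. Bring the database online once all loads are complete
online database pubs2
go
```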
The Repserver interfaces file will need entries for both Primary and Replicate Dataservers and the Primary Dataserver’s interface file will need an entry for the Repserver.
Let's first get some things set up and make a note of the various login passwords you will need: the Dataserver sa login, the Dataserver maint login (which needs to be created on both dataservers), the Replication Server sa login and the Replication Server RepAgent login. Unless you have an account with the same name and password on all three servers (the RepServer and the two dataservers), e.g. sa, you will also need to create a RepAdmin login on the RepServer and the two Dataservers, giving it sa_role and replication_role on the Dataservers and sa role on the RepServer. This login MUST be used when defining the subscription later on. If the maint user already exists on the primary, drop it and instead alias the maint user to dbo, e.g. sp_addalias Database_maint, dbo
Create the connections to the source (pubs2) and replicate (pubs2_rep) databases using rs_init, marking pubs2 as being replicated.
After the connections are created you need to disable the firing of any triggers on the replicate side with:
alter connection to <replicate server>.<replicate db>
set dsi_keep_triggers to 'off'
go
Mark the whole primary/source database (e.g. pubs2) for replication with:
sp_reptostandby pubs2, 'all'
go
Alternatively, you can mark just individual table(s) for replication with:
use pubs2
go
sp_setreptable table_name, true
go
If you are setting the whole database to replicate (sp_reptostandby) then you also need to enable warm standby transactions, which allows replication of DDL statements:
sp_config_rep_agent pubs2, send_warm_standby_xacts, true
go
Make sure the RepAgent is up and running, as it is needed for the dump marker to trigger below. This can be checked in the primary Dataserver with sp_help_rep_agent and also on the RepServer with admin who. Another thing to check is for a valid secondary truncation point in the primary database: run dbcc gettrunc and look at the secondary trunc state, which should be 1 for valid. If it's not, stop the RepAgent, run dbcc settrunc('ltm', 'valid'), then in the RSSD run rs_zeroltm <primary server>, <primary db>, and then back in the primary database start up the RepAgent and check the secondary truncation point again.
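As a sketch, the reset sequence just described looks like this (pubs2 and the server placeholder are examples; sp_stop_rep_agent/sp_start_rep_agent are the standard ASE procedures for stopping and starting the RepAgent):

```sql
-- In the primary database: check the secondary truncation point
dbcc gettrunc
go
-- If the secondary trunc state is not 1, stop the RepAgent first
sp_stop_rep_agent pubs2
go
dbcc settrunc('ltm', 'valid')
go
-- In the RSSD database on the ASE server housing it:
rs_zeroltm <primary server>, pubs2
go
-- Back in the primary database, restart the RepAgent and re-check
sp_start_rep_agent pubs2
go
dbcc gettrunc
go
```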
Assuming the above is fine, we now create the database replication definition using the RepAdmin login:
create database replication definition pubs2_rdf
with primary at <Primary server>.<Primary DB>
replicate ddl
go
Define the subscription to the database rep def using the RepAdmin login:
define subscription pubs2_sub
for database replication definition pubs2_rdf
with primary at <Primary server>.<Primary DB>
with replicate at <Replicate server>.<Replicate DB>
use dump marker
go
One very important thing to note with create or define subscription is that the login you use to run it, either interactively or through a script, must exist on the primary (and secondary) dataserver and the RepServer with the same name and password. You can check the creation of the subscription in the RepServer errorlog; there should be a line saying "Placing marker in <Primary server>.<Primary db> log".
Dump the primary database. Once the dump is complete you should check the RepServer errorlog again; as soon as the dump marker gets processed there should be entries in the log like:
"Distributor for '<Primary server>.<Primary DB>' received and processed enable marker. Distributing commands to replicates."
"Distributor for '<Primary server>.<Primary DB>' received and processed dump marker. The DSI thread for database '<Replicate server>.<Replicate DB>' is shutdown."
Now load the dump into the replicate DB and online the database.
You now add the maint user as an alias to dbo (or give it equivalent permissions) in the replicate database, unless it has exactly the same suid on both servers, in which case it will already be aliased to dbo, but check this.
Now remove the secondary truncation point from the replicate database with dbcc settrunc('ltm', 'ignore')
Resume the DSI connection to the replicate database:
resume connection to <Replicate server>.<Replicate DB>
go
That should be it: you should now have database replication set up between pubs2 and pubs2_rep, which you can test by creating a new table and entering some data in pubs2 and checking that it gets created and sent across to pubs2_rep.
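A minimal smoke test along those lines (the table name and data are arbitrary examples):

```sql
-- On the primary dataserver
use pubs2
go
create table rep_test (id int, note varchar(40))
go
insert rep_test values (1, 'replication smoke test')
go
-- Then on the replicate dataserver the table and row should appear shortly
use pubs2_rep
go
select * from rep_test
go
```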
By the way, to allow replication of DDL statements you might need to enable 'ddl in tran' in the replicate database. I think this is already covered by the "alter connection to <replicate server>.<replicate db>" step, but in case of any errors or issues just be aware of it:
use master
go
sp_dboption pubs2_rep, 'ddl in tran', true
go
– First you need to create the backup/log shipping directory on the primary, share this folder, and add full permissions for the primary dataserver's SQL Server AND SQL Agent service accounts and read permissions for the secondary dataserver's SQL Server AND SQL Agent service accounts.
– You need to repeat this at the destination server, giving read access to the primary server's SQL Server service account and read/write access to the secondary dataserver's SQL Agent AND SQL Server service accounts.
– You'll next need to create the secondary database; this obviously needs to be sized the same as the primary database, and set it to simple recovery mode.
– You can now fire up the log shipping wizard from the primary dataserver by ticking the log shipping radio button under properties for the database.
NOTE: The wizard is a bit rubbish and should not be used to set up log shipping, but it is useful for generating the various stored procs which need to be run on the primary and secondary dataservers.
– The wizard is fairly self explanatory. Select the primary dataserver's log shipping directory as the backup folder (use the network share path). Also, and VERY IMPORTANT, make sure that you select "No, the secondary database is initialized" on the Initialize Secondary Database tab. Select the secondary log shipping directory (configured in step 2 above) as the "destination folder for copied files" on the Copy Files tab. The final step is to select Standby mode and "disconnect users" on the Restore Transaction Log tab.
– Click the button “Script Configuration” to output all this information to a new query window and save it, you can then close the wizard.
– First make sure the primary database is set to the Full recovery model in properties, then back it up into the log shipping directory created earlier.
– Copy this dump across to the secondary server log shipping directory.
– Now from the secondary server load the database previously created from this backup. Make sure you leave this database in RESTORE WITH STANDBY mode! Place the standby file in the secondary server logshipping directory.
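The restore on the secondary can be sketched as below; the database name, share paths and standby (.tuf) file name are example placeholders, not values from this setup:

```sql
-- Restore the full backup copied into the secondary log shipping directory,
-- leaving the database in standby (read-only) mode so the subsequent
-- transaction log restores can still be applied.
RESTORE DATABASE SecondaryDB
FROM DISK = N'\\secondary\logship\primary_full.bak'
WITH STANDBY = N'\\secondary\logship\secondary_standby.tuf',
     REPLACE   -- overwrite the placeholder database created earlier
```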
– On the primary server run the following commands from the script you generated earlier: sp_add_log_shipping_primary_database to add the primary database, sp_add_schedule to add a schedule for the backup job, then sp_attach_schedule and sp_update_job.
– (Optional) On the monitor server, execute sp_add_log_shipping_alert_job to add the alert job.
– On the secondary server, execute the jobs from the script: sp_add_log_shipping_secondary_primary, sp_add_schedule and sp_attach_schedule for the copy and restore jobs, and finally sp_add_log_shipping_secondary_database followed by two sp_update_job calls for the copy and restore jobs.
– On the primary server, execute sp_add_log_shipping_alert_job and sp_add_log_shipping_primary_secondary to add the required information about the new secondary database to the primary server and get things up and running.
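For reference, the primary-side registration call from the generated script looks roughly like this; the database name, share path, job name and retention values here are placeholders, and the wizard-generated script will contain the exact parameter values for your configuration:

```sql
DECLARE @backup_job_id uniqueidentifier, @primary_id uniqueidentifier

-- Register the primary database and its backup destination for log shipping
EXEC master.dbo.sp_add_log_shipping_primary_database
    @database = N'PrimaryDB',
    @backup_directory = N'\\primary\logship',
    @backup_share = N'\\primary\logship',
    @backup_job_name = N'LSBackup_PrimaryDB',
    @backup_retention_period = 4320,   -- minutes to keep old log backups
    @backup_threshold = 60,            -- alert if no log backup for 60 minutes
    @backup_job_id = @backup_job_id OUTPUT,
    @primary_id = @primary_id OUTPUT
```

The generated script then wires a schedule onto the backup job it created via sp_add_schedule, sp_attach_schedule and sp_update_job.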
MSSQL has many excellent tools which can be used for discovering performance bottlenecks.
Sp_monitor and dbcc perfmon
These commands are very useful for getting a snapshot of what the SQL Server is doing.
Run sp_monitor twice with a gap of 30 seconds in between to get a high level overview of how busy the sql server is.
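For example, to take the two snapshots 30 seconds apart in one batch (sp_monitor reports activity since its previous run, so the second snapshot covers exactly the 30-second window):

```sql
exec sp_monitor   -- first snapshot; figures are "since last run"
go
waitfor delay '00:00:30'
go
exec sp_monitor   -- second snapshot covers the 30-second interval
go
```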
dbcc perfmon will give you much more detailed information on the state of the SQL Server, but the command has been superseded by the various SQL Server counters which can be used in System Monitor.
SSMS Reports
SQL Server has a set of excellent reports which are generated from the Dynamic Management Views, available at both the server and database level. If you right click on the server or an individual database in SQL Server Management Studio you can select which report to run, and you can even export the report output to a PDF. This will highlight any problems which you can then focus on.
System monitor
To drill down further the first port of call should be system monitor or perfmon, which is started by typing perfmon into the run box.
One excellent feature is that you can record a counter log at the same time as running a SQL Profiler session (discussed later); you can then merge the two, which will let you see how bad queries affect the CPU, memory etc., or vice versa.
The objects to normally check for in system monitor are:
Memory, Network Interface, Physical Disk, Processor, SQLServer:Buffermanager, SQLServer:Databases, SQLServer:Exec Statistics, SQLServer:General Statistics, SQLServer:Latches, SQLServer:Locks, SQLServer:Memory Manager, SQLServer:PlanCache, SQLServer:SQL Statistics and System.
SQL Profiler
The system monitor might highlight the bad performance but you will typically need to drill down a bit further to find the bad queries which are causing the problems. To do this you use SQL Profiler.
First step is to establish a performance baseline as follows;
– Start Profiler, Select file, New trace and connect to the relevant instance.
– Specify a trace name, template (Standard) and a location to save the trace file to. Set the maximum file size to 50MB and enable file rollover.
– In the Events Selection tab untick the Audit Login, Audit Logout, ExistingConnection and SQL:BatchStarting check boxes.
– Click on Show all columns and select the TextData, NTUserName, LoginName, CPU, Reads, Writes, Duration, SPID, StartTime, EndTime, BinaryData, DatabaseName, ServerName and ObjectName columns.
– Click on Column Filters and select the Database name to be the database you are interested in.
– Click Run to start the trace and simultaneously start the System Monitor Counter Log.
Correlating the counter Log with Profiler Trace
– Start Profiler, Select File, Open and select the trace file you created from the Performance baseline previously.
– Next go to File, select Import Performance Data and select the Counter Log file you created simultaneously with the performance baseline trace.
– In the Performance Counters Limit dialog window select Network Interface: Output Queue Length, Processor: % Processor Time, System: Processor Queue Length and SQLServer:Buffer Manager: Page life expectancy. Click OK.
– You can then scroll through the trace and find what was happening at peak activity on the Windows machine and also what effect certain commands had on the overall server performance.
-- Various parameters that can improve the performance of the Replication Server
rs_configure memory_limit, '200'
-- dsi_sqt_max_cache_size
alter connection to DCOCCDWP.defacto set dsi_sqt_max_cache_size to '62914560'
go
suspend connection to DCOCCDWP.defacto
go
resume connection to DCOCCDWP.defacto
go
-- Cache system tables
configure replication server set sts_full_cache_rs_classes to 'on'
configure replication server set sts_full_cache_rs_databases to 'on'
configure replication server set sts_full_cache_rs_columns to 'on'
configure replication server set sts_full_cache_rs_config to 'on'
configure replication server set sts_full_cache_rs_datatype to 'on'
configure replication server set sts_full_cache_rs_functions to 'on'
configure replication server set sts_full_cache_rs_objects to 'on'
configure replication server set sts_full_cache_rs_publications to 'on'
configure replication server set sts_full_cache_rs_queues to 'on'
configure replication server set sts_full_cache_rs_repdbs to 'on'
configure replication server set sts_full_cache_rs_repobjs to 'on'
configure replication server set sts_full_cache_rs_diskaffinity to 'on'
configure replication server set sts_full_cache_rs_routes to 'on'
configure replication server set sts_full_cache_rs_systext to 'off'
configure replication server set sts_full_cache_rs_sites to 'on'
configure replication server set sts_full_cache_rs_version to 'on'
configure replication server set sts_full_cache_rs_users to 'on'
configure replication server set sts_full_cache_rs_translation to 'on'
go
rs_configure sts_cachesize, '3500'
go
/* REP15 extra settings:
-- Parallel dsi
alter connection to DCOCCDWP.defacto set parallel_dsi to 'on'
go
suspend connection to DCOCCDWP.defacto
go
resume connection to DCOCCDWP.defacto
go
rs_configure exec_sqm_write_request_limit, '983040'
go
rs_configure md_sqm_write_request_limit, '983040'
go
rs_configure smp_enable, 'on'
go
*/
-- It is also a good idea to use disk affinity to bind connections to their own partitions:
-- e.g. alter connection to DCO.ProductCatalog_new set disk_affinity to 'P08'
1. First you need to create a new directory on Unix for the installation, e.g. /progs/fps/sybase15, copy the installation tar file to the Unix machine, then uncompress and untar it. If there is no room anywhere else on the machine, just place it in a subdirectory under sybase15.
2. Make sure that the OS has the necessary patches; if not, arrange for them to be installed.
3. You will need to ask the Unix SA to create a new raw device for you, sybsystemdbdev of about 128MB for the new sybsystemdb database, e.g. /dev/vg_fps/rlv_systemdbdev. Also check the size of the sybsystemprocs device: it should be at least 160MB, and if it's less, ask the Unix SA to create a new raw device for you, e.g. /dev/vg_fps/rlv_sysprocsdev2
4. Run setup to install ASE 15 into its own separate directory (setup -console), e.g. sybase15, and answer no to the question about licenses; we will fix this later on. So specify:
– Change the installation directory to e.g /progs/fps/sybase15
– Select Full installation
– If asked about licenses select Continue installation without a license key
– Enterprise Edition
– SR : Server License
– Then answer yes to continue without a license.
– No to configure email alerts
Check that all the options are correct, especially the install directory, and click 1 to install the files.
– Answer NO to the remember password feature question.
– Deselect all the servers from configuration, we are not building the server now.
5. You should install the latest patch for this version of ASE, which can be downloaded from sybase.com: just remove the original installation files, replace them with the patch files, then run the patch install. Do the patching after the Sybase server is upgraded.
6. Make sure the existing Sybase server is sourced and is displayed in DSQUERY etc., then run the script /progs/fps/sybase15/ASE-15_0/upgrade/preupgrade, specifying the sa password for the existing server, to check that everything is in place for an upgrade. This program will let you know if you need to increase any parameters and whether you need to expand any databases. Answer NO at the end when it asks if you want to proceed with the upgrade; this is just a precheck. You will most probably need to extend sybsystemprocs and sybsystemdb onto the new Unix devices created earlier, and also extend model to at least 4MB; in fact all user databases must be at least 4MB. If everything is fine then the last two lines should say:
Preupgrade of Adaptive Server to 15.0 is complete.
Upgrade eligibility test succeeded.
7. You now need to add the following 3 environment variables to the SYBASE.csh file:
OLDSYBASE=<current $SYBASE>
OLDSYBASE_ASE=<current $SYBASE_ASE>
OLDSYBASE_OCS=<current $SYBASE_OCS>
8. Source the new SYBASE.csh and create a resource file in $SYBASE/$SYBASE_ASE/init/sample_resource_files from the template where you fill in the values for the old $SYBASE path, Sybase server name, Login (sa), and the password for current sa login on existing server.
9. Turn off auditing on the existing Sybase server, completely uninstall the sybsecurity database and sybaudit, and reboot the Sybase server. Run sp_configure 'auditing' and check that the run value is 0, to make sure the auditing subsystem is switched off, VERY IMPORTANT!!
10. You then run the upgrade with $SYBASE/$SYBASE_ASE/bin/sqlupgraderes -r $SYBASE/$SYBASE_ASE/init/sample_resource_files/<resource file>
Upgrade the RSSD database, start with the ID server.
Install the new Replication Server version 15 software on your system. Put it in a new directory, e.g. /progs/torigodb/syb_rs15. After gunzipping and untarring the RepServer tarball, run ./setup -console:
– Accept the license agreement and select Full installation.
– Answer no to “Will licenses be obtained from the License server”.
– No to configure email alerts
– No to start a sample rep server
You can then install any required Sybase patches, e.g. ESD1.
Generate a license key (through sybase.subscribenet.com) and store it in the SYSAM-2_0/licenses directory.
Copy any custom scripts over from the old RepServer, e.g. .cshrc, repscripts etc., and modify them as necessary, i.e. change the SYBASE path etc.
Run “rs_helpsub” from the rssd database to verify that all subscriptions are valid. The status only needs to be Valid at the replicate replication server.
Copy the interfaces file (or interfaces) from the pre version 15 to the version 15 installation.
Log on to the RSSD ASE server and grant the primary user the sa_role ("grant role sa_role to <username>") if it doesn't already have it. You can determine the name of the primary user by looking in the configuration file for the keyword RSSD_primary_user.
Log on to the replication server and run “admin health” to see if everything is ok.
Shut down the Replication Server, back up the RSSD database, then restart the Replication Server.
Leave repserver running.
Set the environment variable $SYBASE to the new REP15 installation directory
Set the environment variable SYBASE_REP to REP-15_0
Select “Configure a Server product” -> “Replication server” -> “Upgrade an existing replication server”.
Answer all the questions. The configuration file should be the path and filename of the old config file.
Do Ctrl-A to start.
After a successful upgrade
Repoint the Replication Server paths to the new installation and copy the old runserver file to its new location. Make sure to change the paths in the runserver file to pick up the new version 15 executable and the correct config files etc. Also copy over the old config file.
Fix or copy any other bespoke files. You need to update the $SYBASE/config/objectid.dat file and add a line similar to
To perform any heterogeneous replication, i.e. replication between two different datasources, e.g. Sybase to Oracle or DB2 to Oracle, you need to use Direct Connect, or as it is now called, Enterprise Connect Data Access.
The example below shows how to set up a connection to a new Oracle database, but it can obviously be applied to other databases as well, with some different scripts etc.
In the example below Direct Connect is already installed; installing the software is not so difficult anyway, please refer to the product manuals for details.
1. First you will need to create a new Direct Connect process by creating a new config file in the $SYBASE/$SYBASE_DCO/install directory on the server where Direct Connect is installed. Just copy the existing .cfg file and RUN file and alter them to reflect the new settings: in the RUN file just change the servername, and in the .cfg file add the new servername, username (in this case the Oracle username), the path to the new errorlog file and the connect_string.
2. Add an entry for this new Direct Connect process in the local interfaces file like:
DCOCCDWP
	master tcp ether p-***-tm16 ****
	query tcp ether p-***-tm16 ****
3. Also add this into the interfaces file for the REPServer and the ASE server which is acting as the RSSD for the Repserver.
4. Add the Oracle connection information into the tnsnames.ora file, which is in $SYBASE/$SYBASE_DCO/network/admin:
DWMU =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = ***.**.***.**)(PORT = ****))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DWMU)
    )
  )
5. You can now start the new Direct Connect process by running e.g. startserver -f RUN_DCODWMU.
6. You can check that it is working by trying to log into the Oracle database with e.g. isql -Udefacto -SDCOCCDWP -P<oracle password>.
7. You now need to go to the Repserver and make a copy of the following files in the $SYBASE/$SYBASE_REP/scripts directory :
hds_oracle_udds.sql hds_clt_ase_to_oracle.sql hds_oracle_funcstrings.sql hds_oracle_setup_for_replicate.sql hds_oracle_connection_sample.sql
For the first three scripts you need to edit the scripts, enter your RSSD database name, and execute them against the relevant ASE server housing the RSSD database for the RepServer, e.g. isql -Usa -Sdhp_SOLP -DREP_dhpsolo_RSSD -i$SYBASE/$SYBASE_REP/scripts/DCOCCDWP_hds_oracle_udds.sql
The script hds_oracle_setup_for_replicate.sql needs to be copied over to the Direct Connect Unix box and run via Direct Connect against the replicate database, e.g. isql -Ucm -P<oracle password> -SDCODWMU -ihds_oracle_setup_for_replicate.sql
8. The last step is to create the connection to the Oracle database from the repserver and that is done by modifying the hds_oracle_connection_sample.sql script so that it looks similar to:
create connection to DCOCCDWP.defacto
set error class to rs_sqlserver_error_class
set function string class to rs_oracle_function_class
set username to *******
set password to **********
set 'batch' to 'off'
go
You then run this script against the repserver.
9. To test that everything is OK you can try to connect to the Oracle database from the Replication Server with e.g. isql -Udefacto -SDCOCCDWP -P<oracle password>
If this connects then everything should work, at least in terms of connectivity. If it doesn't, you will probably have to open up the firewall both ways between the Replication Server port and the Direct Connect port, e.g. between pkg_solp (***.**.***.***) repserver port (2040) and dhp-tm16 (**.***.*.**) dco port (****).
10. You then continue to define the rep defs, subscriptions etc.