Backup and Restore utilities
Information Management Partner Technologies
Agenda: Overview of Backup and Restore, ontape utility, onbar utility, Appendix
Concept of Backup
Types of Backups

The server sends raw online pages just as they exist on disk.

FULL: Usually considered a complete backup of all devices to a known point of consistency. Contains the physical image of device pages, the logical logs, and any updates occurring during the backup. Might be run every day or weekly, depending upon needs.

INCREMENTAL: A backup of any changes since the last full backup. Usually faster than a full backup, as only a portion of the instance is saved. Incremental backups gradually get longer as more changes accumulate while no new full backup is taken.
Incremental Backups

ON-Bar and ontape provide three levels of backup: level-0, level-1, and level-2, also called physical backups.

Level-0: contains a copy of all the data in the server system, or in the specified dbspace(s), in the state in which it existed when the backup was initiated. A level-0 backup is the baseline. Important: if disks and other media are completely destroyed and must be replaced, you need a level-0 backup of all storage spaces plus the relevant logical logs to restore data completely.

Level-1: contains a copy of all the data in the server system or specified dbspace(s) that has changed since the last level-0 backup, in the state in which it existed when the backup was initiated. Generally faster than a level-0 backup.

Level-2: contains a copy of all the data in the server system that has changed since the last level-1 backup. Generally faster than a level-0 or level-1 backup. Informix will not allow a level-2 backup directly after a level-0.
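As a toy sketch of the level rules above (the page identifiers and integer "timestamps" are invented for illustration; this is not how Informix actually tracks changed pages):

```python
def pages_in_backup(level, pages, last_l0=None, last_l1=None):
    """Which pages a level-0/1/2 backup copies, per the rules above.

    pages: dict of page_id -> last-modified time (any comparable value).
    """
    if level == 0:
        return set(pages)                       # baseline: every page
    if last_l0 is None:
        raise ValueError("a level-0 baseline must exist first")
    if level == 1:
        since = last_l0                         # changes since last level-0
    else:
        if last_l1 is None:                     # mirrors the restriction above
            raise ValueError("level-2 requires a prior level-1")
        since = last_l1                         # changes since last level-1
    return {p for p, mtime in pages.items() if mtime > since}
```

For example, with pages last modified at times 1, 5, and 9 and a level-0 taken at time 4, a level-1 copies only the pages changed after time 4.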
Logical-Log Backups

A logical-log backup is the process of copying the contents of a logical-log file to secondary storage media, which allows the log to be reused. Without logical-log backups, you can recover a failed system only from full backups. Logging occurs even in non-logging databases. To complete a logical restore successfully, the logical-log backup must have been created with the same backup utility used to create the physical backup.

The logical logs store records of checkpoints, administrative activity such as DDL statements, and transaction activity for the databases in the server instance. Every server instance has a finite number of logical logs, which the server uses in a circular fashion: records are written to the log files serially, and when the first log fills up, the server begins writing to the second log, and so on. When all the logs have been used, the server begins writing to the first log again. Before the server can reuse a log, it must be backed up. Even if none of the databases in the server instance use transaction logging, the logical logs should be backed up.
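The circular-reuse rule above can be modeled as a small sketch (log counts and capacities are illustrative, and the real server blocks rather than raising an error):

```python
class LogicalLogs:
    """Toy model of circular logical-log reuse: records fill the logs
    serially, and a full log can only be overwritten after backup."""

    def __init__(self, nlogs, capacity):
        self.capacity = capacity
        self.used = [0] * nlogs
        self.backed_up = [True] * nlogs   # fresh logs are safe to use
        self.current = 0

    def write(self, nrecords):
        """Append records, switching to the next log when one fills."""
        while nrecords > 0:
            if self.used[self.current] == self.capacity:
                self._switch_log()
            n = min(nrecords, self.capacity - self.used[self.current])
            self.used[self.current] += n
            self.backed_up[self.current] = False
            nrecords -= n

    def _switch_log(self):
        nxt = (self.current + 1) % len(self.used)
        if not self.backed_up[nxt]:
            # the real server would block here until a backup frees a log
            raise RuntimeError("next log not backed up; cannot reuse it")
        self.used[nxt] = 0
        self.current = nxt

    def backup_used_logs(self):
        """Back up every log except the one currently being written."""
        for i in range(len(self.used)):
            if i != self.current:
                self.backed_up[i] = True
```

Once all logs have been written, further writes fail until backup_used_logs frees them, which is the "log full" condition the backup modes below address.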
Logical Log Contents

The logical logs store several types of transaction records. Each logical-log record carries header information; depending on the record type, additional information may also be stored. Whenever a checkpoint occurs or a dbspace or chunk is added, a log record is written.

Header columns:
addr - log-record address (log position), hexadecimal
len - record length in bytes, decimal
type - record-type name, ASCII
xid - transaction number, decimal
id - logical-log number, decimal
link - link to the previous record in the transaction

Logical-Log Record Types and Additional Columns: in addition to the six header columns that appear for every record, some record types display additional columns of information, which vary by record type. The Action column indicates the type of database server action that generated the log entry. The Additional Columns and Format columns describe what information appears for each record type in addition to the header.
When are Logs Backed Up? On-Demand Automatic Continuous
On-Demand: an administrator or operator issues a command, and all used logical logs are backed up.
Automatic: triggered by ALARMPROGRAM; logical logs are backed up when a log-full event is raised.
Continuous: a running process backs up each logical log as it fills.

Logical-log backups may be executed on demand by an administrator or operator, or triggered automatically via the ALARMPROGRAM configuration parameter. On-demand backups are performed when an administrator or operator executes a log backup request using either onbar or ontape. Automatic backups are configured by specifying a program, via ALARMPROGRAM, that executes a logical-log backup command whenever the server raises a log-full event. Typically, if you choose onbar as your backup utility, you will configure automatic logical-log backups. The ontape -c and onbar -C options continuously back up each logical log as it fills, or as the server switches to the next log; continuous logical-log backups require a dedicated backup device. Restoring logical-log records is slow compared to restoring pages from a dbspace backup, so to minimize the number of log records that must be applied during a recovery, create frequent dbspace backups.
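The three modes can be contrasted in a minimal sketch; the mode names, the "log_full" event label, and the returned strings are invented for illustration, not real Informix event classes or commands:

```python
def log_backup_trigger(mode, event=None):
    """How a logical-log backup gets launched in each mode above.
    'log_full' is a simplified stand-in for the server's log-full event."""
    if mode == "continuous":
        # ontape -c / onbar -C: a dedicated process backs up each log as it fills
        return "continuous process handles it"
    if mode == "automatic":
        # ALARMPROGRAM fires a backup command only when a log fills
        return "run ALARMPROGRAM backup" if event == "log_full" else "no action"
    if mode == "on-demand":
        return "wait for administrator command"
    raise ValueError("unknown mode")
```

The design point the slide makes is visible here: only the automatic mode couples the backup to the server's own event stream; the other two depend on an external process or person.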
Physical and Logical Restore
Types of Restores Cold Warm Mixed
Cold: a restore that occurs when the root dbspace, or a dbspace that holds the physical or logical logs, is inaccessible. The instance is offline.
Warm: a restore that occurs when the root dbspace and the dbspace(s) containing the physical and logical logs are accessible. The server must be in online, quiescent, or single-user mode.
Mixed: a cold restore followed by a warm restore.

Several scenarios can occur that would require a restore:
The entire server system is unavailable (you cannot bring the server to online mode).
A critical dbspace, such as the root dbspace or a dbspace that holds the logs, is unavailable (one or more of its chunks are marked down).
A non-critical dbspace is unavailable (one or more of its chunks and their associated mirror chunks are marked down).

The first two scenarios require a cold restore: the server is in offline mode and cannot be used until the critical dbspaces are restored. The third scenario requires a warm restore.
Cold Restore
Warm Restore
Log Salvage
Archecker – Table-Level Restore
Utility to restore or copy data.

Restore data: use the archecker utility to restore a specific table, or set of tables, that was previously backed up with ON-Bar or ontape. The tables can be restored to a specific point in time, which is useful, for example, to restore a table that was accidentally dropped.

Copy data: the archecker utility can also be used as a method of copying data, for example to move a table from the production system to another system.

archecker extracts specific pages from the archive. It is useful where portions of a database, a table, a portion of a table, or a set of tables need to be recovered, and in situations where tables need to be moved across server versions or platforms.
Backup and Recovery Strategy
ontape Simple to set up and run Serial backup of dbspaces
ontape is simple to set up and run. It performs a serial backup of dbspaces and whole-system backups only, and can restore the entire system or a single dbspace. The size of the buffers used to transmit data is controlled by the TAPEBLK config param.

STDIO (Informix 10 feature): back up to standard output (stdout) and restore from standard input (stdin). Specifying STDIO allows ontape to use OS pipes for archives and restores, which is not interactive, is easier to automate, and introduces flexibility. The ontape -t option can be used to override TAPEDEV.

IFX_ONTAPE_FILE_PREFIX: an environment variable that ontape uses to change the prefix (<hostname>_<servernum>) portion of the backup file name(s).

With STDIO, error and information messages are written to stderr and there is no prompt or user interaction: no prompts for log salvage, to confirm the restore, or for the log restore. To salvage logical logs, use the ontape -S command prior to the physical restore; perform the physical restore with ontape -p (there is a 10-second delay between the archive info display and the start of the restore); to perform the logical restore, use ontape -l after the physical restore. For example:

ontape -s -L 0 -t STDIO | gzip > dbserver_L0.gz
gunzip -c dbserver_L0.gz | ontape -p -v -t STDIO

New feature in IDS 11: you can specify a directory for TAPEDEV and LTAPEDEV, either a local directory or a mounted directory. ontape creates a single file in the backup directory for each level of an archive and for each log file:

<hostname>_<servernum>_L<#>
<hostname>_<servernum>_Log<##########>

This allows ontape backups to be automated. The first L0 backup file is renamed when the second backup is taken: <hostname>_<servernum>_L0 becomes <hostname>_<servernum>_<YYYY-MM-DD_HHMMSS>_L0. For example:

Directory of C:\IDS
08/06/ :22 PM <DIR> .
08/06/ :22 PM <DIR> ..
08/06/ :49 PM 16,941,056 IBM-MACH_0_ _144941_L0
08/06/ :22 PM 16,941,056 IBM-MACH_0_L0

You can change the prefix <hostname>_<servernum> by setting the environment variable IFX_ONTAPE_FILE_PREFIX in the ontape environment. The timestamp comes from the archive itself, so it is based on when the archive was taken.
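The directory-backup naming and renaming scheme can be sketched as follows; the helper names are hypothetical, and only the <hostname>_<servernum>_L<#> pattern and the timestamped rename come from the text:

```python
from datetime import datetime

def backup_filename(hostname, servernum, level, prefix=None):
    """File name ontape creates in a directory TAPEDEV; setting
    IFX_ONTAPE_FILE_PREFIX replaces the <hostname>_<servernum> part."""
    base = prefix if prefix else f"{hostname}_{servernum}"
    return f"{base}_L{level}"

def rename_on_new_backup(files, hostname, servernum, level, archive_time):
    """When a new level-N archive is taken, the previous level-N file is
    renamed with the timestamp of the archive it holds, and the new
    archive takes the plain name."""
    old = f"{hostname}_{servernum}_L{level}"
    if old in files:
        stamped = (f"{hostname}_{servernum}_"
                   f"{archive_time:%Y-%m-%d_%H%M%S}_L{level}")
        files[files.index(old)] = stamped
    files.append(old)
    return files
```

This mirrors the directory listing above, where the older archive carries a date-and-time infix while the most recent one keeps the bare _L0 suffix.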
ontape - Threads Scanner Thread (arcbackup1) Backup Thread (ontape)
The backup client starts the process: it initiates the backup, requests a checkpoint, blocks the server for the archive, prepares the list of pages to back up, and forks two other backup threads.

ontape: the name of this thread is always ontape, regardless of the archive client used. It is the general coordinator of the backup session, responsible for starting the two arcbackup threads, interacting with the backup client, sending buffers to the client for backup, and passing errors to the client.

arcbackup1 (archive scanner): the thread geared for I/O performance. It is handed a list of pages to back up and scans data from disk into shared-memory buffers, concentrating exclusively on I/O. It makes no decisions about the data beyond checking the format of the pages (that the page address is correct).

arcbackup2 (before-image processor): the "thinker". It monitors the before-image queues and is responsible for collecting all the images that are modified during the archive. It drains the before-image memory queue by storing the page images into temp tables, and manages the temp tables the archiver creates; it can create multiple temp tables for a single dbspace if required.
Configuration parameters for backup and recovery
ontape configuration parameters:
TAPEDEV - device path for backups. To use standard I/O, set to stdio.
TAPEBLK - media block size, in KB, for backups.
TAPESIZE - media capacity for a backup tape. Acceptable values are 0 (unlimited) or any positive integral multiple of TAPEBLK.
LTAPEDEV - device path for the logical-log backup device.
LTAPEBLK - media block size, in KB, for logical-log backups.
LTAPESIZE - media capacity for a logical-log backup tape. Acceptable values are 0 (unlimited) or any positive integral multiple of LTAPEBLK.
Introduction to Informix Dynamic Server 18
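The TAPESIZE/TAPEBLK constraint stated above is mechanical and can be checked; a minimal sketch (the function name is invented, sizes are in KB):

```python
def validate_ontape_config(tapedev, tapeblk_kb, tapesize_kb):
    """Check the constraints listed above: TAPESIZE must be 0 (unlimited)
    or a positive integral multiple of TAPEBLK. Returns a list of errors,
    empty when the configuration is acceptable."""
    errors = []
    if not tapedev:
        errors.append("TAPEDEV must be a device path, directory, or 'stdio'")
    if tapeblk_kb <= 0:
        errors.append("TAPEBLK must be a positive block size in KB")
    elif tapesize_kb != 0 and (tapesize_kb < tapeblk_kb
                               or tapesize_kb % tapeblk_kb != 0):
        errors.append("TAPESIZE must be 0 or a positive multiple of TAPEBLK")
    return errors
```

The same multiple-of-block-size rule applies to the LTAPESIZE/LTAPEBLK pair.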
Backup and Restore options of ontape
Product documentation is available in many formats. The online Information Center (IC) is free on the Web and contains the same information as the PDF library, which is shipped on CDs and can be ordered from the IBM Publication Center. We are investigating how to provide a downloadable IC in the future. The examples exchange Web site will contain examples provided on an "as-is" basis; use them as models for your own situations. You will be able to rate each example, and to sort examples by date, topic, or rating. The migration portal can help you navigate the available information and resources related to migrating Informix database products to a new release.
onbar Requires storage manager
onbar communicates with the storage manager via the XBSA interface and is able to support a variety of XBSA-compliant storage managers. Its capabilities include:
Parallel backup and restore of storage spaces
Point-in-time restore
Restartable restore
Unattended operations (monitor BAR_ACT_LOG)
Integrated archive validation (onbar -v)
Filtering during backup and restore (IDS 11)
Automated backups, with new IDS 11 options via SQL Admin, the Database Scheduler, and OAT
Transforming data with external filter programs
Archive of selected dbspaces
Table-level restore
Archive to STDIO

You can use external programs to transform data to a different format prior to a backup and transform the data back to its original format following a restore. These programs are called filters, and can be used for compression or other data transformations. ON-Bar and ontape both call the filters with the path specified by the BACKUP_FILTER and RESTORE_FILTER configuration parameters.

XBSA interface: ON-Bar and the storage manager communicate through the X/Open Backup Services Application Programmer's Interface (XBSA), which enables the storage manager to manage media for the database server. By using an open-system interface to the storage manager, ON-Bar can work with a variety of storage managers that also use XBSA. Each storage manager develops and distributes a unique version of the XBSA shared library; you must use the version provided with your storage manager. For example, if you use ISM, use the XBSA shared library provided with ISM. ON-Bar and the XBSA shared library must be compiled the same way (32-bit or 64-bit).
ON-Bar uses XBSA to exchange the following types of information with a storage manager:
Control data - ON-Bar exchanges control data with a storage manager to verify that ON-Bar and XBSA are compatible, to ensure that objects are restored to the proper instance of the database server and in the proper order, and to track the history of backup objects.
Backup or restore data - during backups and restores, ON-Bar and the storage manager use XBSA to exchange data from the specified storage spaces or logical-log files. ON-Bar uses XBSA transactions to ensure data consistency: all operations included in a transaction are treated as a unit, and all operations within a transaction must succeed for objects transferred to the storage manager to be restorable.
onbar - Archive or Restore Model
The archive is broken down into jobs, with each dbspace being its own backup. An onbar_d thread is started to back up a single dbspace; it connects to the database server and the storage manager to request the backup session, and updates the sysutils database and the ixbar file.

To process a backup request, ON-Bar opens a session with the server and submits a request for the dbspaces, blobspaces, or logical logs to be backed up; these are referred to as database objects. The server retrieves the requested data and places it in shared memory, returning shared-memory pointers to the onbar process. ON-Bar passes these pointers to the storage manager, which is responsible for retrieving the data and transferring it to the backup device. ON-Bar follows the XBSA transaction and object models described in X/Open System Management: Backup Services API: there is one backup object exchanged per XBSA transaction, and multiple XBSA transactions per backup or restore session.

NOTE: During a backup, if ON-Bar encounters a dbspace that is down, it returns an error; if it encounters a blobspace that is down, it skips the blobspace.

ON-Bar uses the following tables in the sysutils database:
bar_action - lists all backup and restore actions that are attempted against an object, except during a cold restore. Use this table to track backup and restore history.
bar_instance - ON-Bar writes a record to this table for each successful backup, and might later use the information for a restore operation.
bar_object - describes each backup object: a list of all storage spaces and logical logs from each database server for which at least one backup attempt was made.
bar_server - lists the database servers in an installation; used to ensure that backup objects are returned to their proper places during a restore.
onbar - Performance and Parallelism
ON-Bar uses buffers, called transport buffers, to receive or transmit data to the storage manager.
BAR_XFER_BUF_SIZE - size of the buffers; default 31 (2 KB page ports) or 15 (4 KB page ports).
BAR_NB_XPORT_COUNT - number of buffers (3-99); default 10.
BAR_MAX_BACKUP - number of backup processes; default 4. Significantly affects ON-Bar performance.

BAR_MAX_BACKUP: when ON-Bar is executed, it attempts to perform the request with as much parallelism as possible, forking one child process per requested object up to BAR_MAX_BACKUP. If a dbspace backup of the entire system is requested for a system with a rootdbs, a logical-log dbspace, a physical-log dbspace, and 30 data dbspaces, and BAR_MAX_BACKUP is 40, ON-Bar would attempt to fork 33 child ON-Bar processes. BAR_MAX_BACKUP must be configured consistently with the storage manager configuration: it may be set to one backup process or stream per device, or, if multiplexing devices are used, to the number of devices times the number of streams supported per device. If BAR_MAX_BACKUP is 0, parallelism is unlimited except for limits imposed by the operating system and hardware.

BAR_NB_XPORT_COUNT: to maximize performance, you may want to experiment with various settings of BAR_NB_XPORT_COUNT. ON-Bar allocates up to (BAR_MAX_BACKUP * BAR_XFER_BUF_SIZE * BAR_NB_XPORT_COUNT) pages of memory in the virtual segment to accommodate the transport buffers. You may want to begin testing with BAR_NB_XPORT_COUNT as large as possible without exceeding available memory, given the number of concurrent ON-Bar processes and the amount of memory available for the virtual segment. For example, with 33 onbar processes and BAR_NB_XPORT_COUNT set to 40, approximately 85 MB of memory is allocated for ON-Bar transport buffers.
Once you calculate the maximum available memory and begin testing with the maximum setting of BAR_NB_XPORT_COUNT, continue trying lower values until you see performance begin to slow down. Ideally, choose the lowest possible setting that allows you to maintain the optimal backup and restore performance. The default value is 10, the minimum is 3. Warning!! If you set BAR_NB_XPORT_COUNT too high, you may induce paging or swapping in the system. BAR_XFER_BUF_SIZE BAR_XFER_BUF_SIZE is limited by the XBSA standard. XBSA limits the communications buffer to 64 KB in size, and one page is reserved by IBM Informix for header information. Specifies the size of each transport buffer used to exchange data with the server. The buffer size is BAR_XFER_BUF_SIZE multiplied by PAGESIZE. The default is 31 for ports specifying a 2 KB page size and 15 for 4 KB ports. The default sizes are also the maximum allowable sizes. This maximum is determined by the XBSA standard which defines the maximum buffer size to be 64 KB with 1 KB reserved for header information. Warning!! Do not change BAR_XFER_BUF_SIZE! If you change the transport buffer size, you may invalidate all your existing backup objects. For example, imagine that you perform a level 0 dbspace backup of your system using the default size of 31 pages. Later, you change the transport buffer size to 30 pages. After changing the transport buffer size, you create a level 1 backup and continue to collect logical log backups. If you attempt to restore your level 0 backup, created with the 31 page transport buffer size, onbar will report that the level 0 backup objects are corrupted or invalid. If you change the BAR_XFER_BUF_SIZE back to 31, you will be able to restore the level 0 backup objects, but onbar will not allow you to restore the level 1 backup objects or the logical logs. Because they were created with a different transport buffer size, onbar does not recognize these backup objects as valid. 
ON-Bar uses buffers, called transport buffers, to receive or transmit data to the storage manager. These configuration parameters allow you to configure the size of the buffers, (BAR_XFER_BUF_SIZE), the number of buffers (BAR_NB_XPORT_COUNT) and the number of backup processes (BAR_MAX_BACKUP).
onbar - Performance Factors
BAR_MAX_BACKUP: onbar forks processes up to the value set (parallelizing the operation). If set to 1, the operation is serialized; if set to 0, parallelism is unlimited except for OS limits.
BAR_NB_XPORT_COUNT: applies to each onbar backup stream. Increasing it increases the number of virtual-memory "buckets" used: if there are more buckets to fill with data, there is less waiting for buckets to become empty. Virtual memory used: ((BAR_NB_XPORT_COUNT * BAR_XFER_BUF_SIZE * page_size) + 5 MB) * number of dbspaces archived in parallel.
BAR_XFER_BUF_SIZE: maximum value 15 pages (4 KB page size) or 31 pages (2 KB page size); at most 64 KB per buffer. Do not change it: if you change the transport buffer size, you may invalidate all your existing backup objects! The maximum buffer size is 64 KB, so BAR_XFER_BUF_SIZE * pagesize <= 64 KB. To calculate how much memory the database server needs per buffer, use: memory = (BAR_XFER_BUF_SIZE * PAGESIZE) + 500, where the extra 500 bytes is overhead. For example, if BAR_XFER_BUF_SIZE is 15 on a 4 KB-page port, the transfer buffer is 61,940 bytes.

Increasing BAR_NB_XPORT_COUNT increases the memory used by onbar transport buffers in shared memory (but having more transport buffers means the database server has more buckets to fill with data, thus less waiting for buckets to become empty). BAR_NB_XPORT_COUNT applies to each onbar backup stream: the rootdbs is always backed up by itself, then the other dbspace backups are parallelized (onbar forks more processes) based on the BAR_MAX_BACKUP setting (serial if BAR_MAX_BACKUP is 1). So if an onbar backup is backing up 5 dbspaces at once, the required memory for the transport buffers below must be multiplied by 5.
(from the documentation) required_memory = (BAR_NB_XPORT_COUNT * BAR_XFER_BUF_SIZE * page_size) + 5 MB

The maximum BAR_XFER_BUF_SIZE is 15 on AIX (4 KB base page size) and 31 on Solaris (2 KB base page size). If you set the value higher than the maximum for the platform, it is internally reduced to the platform maximum. If the database server adds a virtual segment purely because of the increased memory requirements of a running backup, you can try removing the extra segment by running the "onmode -F" command after the backup has completed. However, if there are database server activities that are still using the added segment (even after the backup has completed), the server will not remove the new virtual segment: onmode -F only asks the database server to free memory that it can free.
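The two formulas quoted above can be written out directly; a small sketch (function names are invented, all sizes in bytes):

```python
def transfer_buffer_bytes(xfer_buf_size, page_size):
    """Per-buffer memory from the formula above:
    (BAR_XFER_BUF_SIZE * PAGESIZE) + 500 bytes of overhead."""
    return xfer_buf_size * page_size + 500

def required_transport_memory(nb_xport_count, xfer_buf_size, page_size,
                              parallel_dbspaces=1):
    """Documentation formula above, applied per dbspace archived in
    parallel: (BAR_NB_XPORT_COUNT * BAR_XFER_BUF_SIZE * page_size) + 5 MB."""
    per_stream = nb_xport_count * xfer_buf_size * page_size + 5 * 1024 ** 2
    return per_stream * parallel_dbspaces
```

With BAR_XFER_BUF_SIZE 15 on a 4 KB page, the per-buffer figure reproduces the 61,940 bytes quoted in the text.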
Emergency Boot File “ixbar”
Located in $INFORMIXDIR/etc/ixbar.<servernum>, the emergency boot file contains an entry for each backup object and replaces the sysutils database when the server is off-line during a restore. If the emergency boot file is destroyed or corrupted, you may create a new file with the onsmsync utility.

BAR_HISTORY: this parameter was introduced in conjunction with the onsmsync utility. The onsmsync command can be used to purge old records from the sysutils database; it will also purge old lines from the emergency boot file, or generate a new emergency boot file if the original becomes corrupted. You can control whether onsmsync removes records from the sysutils database or just updates their status to indicate that they relate to an expired object by setting the BAR_HISTORY configuration parameter. If BAR_HISTORY is set to 0, onsmsync removes all rows for expired objects from the bar_object, bar_action, and bar_instance tables; if it is set to 1, these rows are not removed, but are updated with a value of 7 in the act_type column of the bar_action table.

BAR_PROGRESS_FREQ: controls how frequently onbar generates a percent-complete message during processing. The interval is specified in minutes and should be set to an integer value n, where n = 0 or n >= 5. The messages are written to the log specified by BAR_ACT_LOG.
BAR_PERFORMANCE: valid values are 0 through 3:
0 - turn performance monitoring off
1 - display the time spent transferring data between the server and the storage manager
2 - display sub-second accuracy in the timestamps
3 - display both timestamps and transfer statistics

IXBAR file columns (IDS v11): server name, object name, object type, whole-system backup, action ID, backup level, saveset ID high, saveset ID low, backup start time, ON-Bar version, first log needed for dbspace restore, checkpoint time, requested action ID, object verified, verify date, checkpoint log, time of checkpoint log close, time of previous log close, backup order.
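The value ranges described above lend themselves to a simple configuration check; a minimal sketch (the function name is invented):

```python
def validate_bar_params(bar_history, bar_progress_freq, bar_performance):
    """Range checks for the ON-Bar parameters described above:
    BAR_HISTORY is 0 or 1; BAR_PROGRESS_FREQ is 0 or an integer >= 5
    (minutes); BAR_PERFORMANCE is 0 through 3."""
    errors = []
    if bar_history not in (0, 1):
        errors.append("BAR_HISTORY must be 0 or 1")
    if not (bar_progress_freq == 0 or bar_progress_freq >= 5):
        errors.append("BAR_PROGRESS_FREQ must be 0 or an integer >= 5")
    if bar_performance not in (0, 1, 2, 3):
        errors.append("BAR_PERFORMANCE must be 0, 1, 2, or 3")
    return errors
```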
onbar Restores Cold Restore (onbar -r)
Restore started with the database server off-line; required when restoring critical dbspaces (the root dbspace and any dbspace containing the physical or the logical logs).
Warm Restore (onbar -r, restores only "down" dbspaces): restore of non-critical dbspaces, started with the database server on-line (or in quiescent mode).
Mixed Restore: restore the critical dbspaces using a cold restore to bring the instance on-line, then use a warm restore to restore the non-critical dbspaces.
Point-in-Time Restore (onbar -r -t " :12:00"): you must back up logs; you can restore the entire system to a specific log or to a point in time.
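The cold-versus-warm decision above reduces to a set test; a sketch assuming spaces are identified by plain name strings:

```python
def restore_type(down_spaces, critical_spaces):
    """Pick the restore type per the rules above: any down critical space
    (root dbspace, or a space holding the physical/logical logs) forces a
    cold restore with the server off-line; otherwise the down spaces can
    be warm-restored with the server on-line."""
    down = set(down_spaces)
    if down & set(critical_spaces):
        return "cold"
    return "warm" if down else "none"
```

A mixed restore is then just the "cold" branch applied to the critical spaces followed by the "warm" branch for the rest.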
onbar Restores Whole-System Restore Requires a whole system backup.
Only type of restore that can be done without applying logical logs: onbar -r -w. Can be done in parallel starting in IDS 11. Can be restored in stages: a physical restore command, then a logical restore command.

onbar -RESTART: you must have had RESTARTABLE_RESTORE set to ON (the default) in the ONCONFIG before you started the restore. If the failure occurred during a physical restore (warm or cold), the restore restarts at the storage space and level where the failure occurred; if the failure occurred during a cold logical restore, the logical restore restarts from the most recent log checkpoint. Restartable logical restore is supported for cold restores only.
onbar - BAR_MAX_BACKUP
Example: BAR_MAX_BACKUP 4 means 4 onbar processes running at the same time, 4 backup sessions going on in parallel in the database server, and 4 sets of transport buffers created in the database server.
BAR_NB_XPORT_COUNT 20 - each set of transport buffers has 20 buffers.
BAR_XFER_BUF_SIZE 31 - each transport buffer holds 31 base pages (2 KB * 31 = 62 KB).
BAR_XFER_BUF_SIZE 15 - each transport buffer holds 15 base pages (4 KB * 15 = 60 KB).
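The arithmetic in the example above can be captured in a small sketch that also enforces the 64 KB XBSA ceiling mentioned earlier (the function name is invented):

```python
def transport_buffer_layout(bar_max_backup, bar_nb_xport_count,
                            bar_xfer_buf_size, base_page_size):
    """Buffer sets implied by the example above: one set per onbar
    process, BAR_NB_XPORT_COUNT buffers per set, each buffer holding
    BAR_XFER_BUF_SIZE base pages (capped at 64 KB by XBSA)."""
    buffer_bytes = bar_xfer_buf_size * base_page_size
    if buffer_bytes > 64 * 1024:
        raise ValueError("transport buffer exceeds the 64 KB XBSA limit")
    return {"sets": bar_max_backup,
            "buffers_per_set": bar_nb_xport_count,
            "buffer_kb": buffer_bytes // 1024}
```

Both default pairings stay under the limit: 31 pages * 2 KB gives 62 KB, and 15 pages * 4 KB gives 60 KB.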
Parallel Whole System Backups and Restores
NEW in IDS 11. The root dbspace is still backed up first and by itself; only then are the rest of the dbspaces backed up. Parallelism is controlled by BAR_MAX_BACKUP: if BAR_MAX_BACKUP is set to 1, onbar -b -w performs a serial backup; if it is set to an integer value greater than 1, onbar -b -w performs a parallel backup. There is only one archive checkpoint for all dbspaces, taken at the beginning of the backup, whereas regular backups perform a separate checkpoint for each space archived. Whole-system backups can therefore be restored to a consistent point without any explicit logical-log backups and restores.

A whole-system backup (onbar -b -w) is a backup of all storage spaces and logical logs based on a single checkpoint; that time is stored with the backup information. The advantage of a whole-system backup is that you can perform a cold restore of the storage spaces with or without the logical logs: because the data in all storage spaces is consistent, you do not need to restore the logical logs to make it consistent. Level 0, 1, and 2 backups are supported. Whole-system backup means there is a single archive checkpoint for all the spaces (dbspaces, blobspaces, and so on); in a regular parallel backup, each space can potentially have a different archive checkpoint, so you must restore logs to bring all the spaces in sync with each other.
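The scheduling rules above can be sketched as follows; the function assumes the space list contains "rootdbs" and is a toy model, not onbar's actual scheduler:

```python
def whole_system_backup_plan(spaces, bar_max_backup):
    """Scheduling sketch for onbar -b -w per the rules above: rootdbs is
    backed up first and alone, the remaining spaces run in waves of up to
    BAR_MAX_BACKUP (serial when 1, unlimited when 0), and one archive
    checkpoint covers every space."""
    others = [s for s in spaces if s != "rootdbs"]
    if bar_max_backup == 0:            # 0 = unlimited parallelism
        width = max(len(others), 1)
    else:
        width = bar_max_backup
    waves = [["rootdbs"]] + [others[i:i + width]
                             for i in range(0, len(others), width)]
    return {"waves": waves, "archive_checkpoints": 1}
```

With a regular (non-whole-system) backup, the last field would instead be one checkpoint per space, which is why logs are needed to bring the spaces back in sync.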
Monitor Backup and Restore – Server Side
Logical log status: onstat -l. Back up logical logs (ontape/onbar -b -l) before starting an archive.
Storage space status: onstat -d. Ensure the flags for dbspaces and chunks are "normal" prior to a backup.
Backup status: oncheck -pr. Provides the same information as onstat -g arc, but can be run when the system is offline.
Buffer queuing status: onstat -g stq. Prints stream queue information.
Backup thread status: onstat -g arc. Displays information about the last committed archive for each dbspace and about any current ongoing archives.

onstat -l displays information about the physical log, logical logs, and temporary logical logs. onstat -d lists all dbspaces, blobspaces, sbspaces, and chunks, with flags including:
D - chunk is down
L - storage space is being logically restored
O - chunk is online
P - storage space is physically restored
R - storage space is being restored
onstat -g arc

Use the onstat -g arc command to display information about the last committed archive for each dbspace, and about any current ongoing archives. [Sample output omitted: a "Dbspaces - Ongoing archives" section listing each dbspace with its before-image queue size, queue length, buffer, partnum, size, and current page being archived, followed by a "Dbspaces - Archive Status" section listing name, number, level, date, log, and log-position for rootdbs and each dbspace/sbspace.]

Ongoing archives columns:
Number - the number of the dbspace
Name - the name of the dbspace
Q Size - the before-image queue list size (primarily for IBM support)
Q Len - the before-image queue length (primarily for IBM support)
Buffer - the number of pages used in the before-image buffer
Partnum - the partition number of the before-image bin
Size - the number of pages in the before-image bin
Current-page - the current page that is being archived

Note: the before-image bin is a temporary table created in a temporary dbspace, or in the root dbspace if you do not have any temporary dbspaces. If the before-image bin becomes too small, it can extend to additional partitions, in which case the output displays multiple Partnum and Size fields for the same dbspace.

Archive status columns (information about the last backup of each dbspace):
Name/Number - the name and number of the dbspace
Level - the archive level
Date - the date and time of the last archive
Log - the unique ID (UNIQID) of the checkpoint that was used to start the archive
Log-position - the log position (LOGPOS) of the checkpoint that was used to start the archive
31
onstat -g stq: Monitor Movement of Data
Use the onstat -g stq command to display information about the stream queue. The output is always taken from the perspective of the server and shows whether the archive client or the archive server is running faster; inconsistent performance is an indication that more memory buffers could improve performance. To view queue information for a particular session, specify the session option; omit it to view queue information for all sessions.
Sample output (each entry is a buffer number and physical address):
Stream Queue: (session 10 cnt 10)
 0:ac8d400 1:ac9d400 2:acad400 3:acbd400 4:accd400
 5:acdd400 6:aced400 7:acfd400 8:ad0d400 9:ad1d400
Full Queue: (cnt 9 waiters 0)
 0:ac9d400 1:acad400 2:0 3:accd400 4:acdd400
 5:aced400 6:acfd400 7:ad0d400 8:ad1d400
Empty Queue: (cnt 0 waiters 1)
Reading the output:
- Stream Queue: the total number of stream queue buffers
- Full Queue: buffers filled by the archive client, waiting to be worked by the server
- Empty Queue: the count of free buffers, and the number of threads waiting for a stream queue buffer
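Those counters can be scraped from captured output with standard text tools. A minimal sketch, assuming output in the shape shown above; the sample text and field positions come from this slide, not from a live server.

```shell
# Sample "onstat -g stq" header lines, as shown on the slide.
sample='Stream Queue: (session 10 cnt 10)
Full Queue: (cnt 9 waiters 0)
Empty Queue: (cnt 0 waiters 1)'

# Strip the parentheses, then pick out the counter fields.
full=$(printf '%s\n' "$sample" | awk '/^Full Queue/ {gsub(/[()]/,""); print $4}')
waiters=$(printf '%s\n' "$sample" | awk '/^Empty Queue/ {gsub(/[()]/,""); print $6}')
echo "full buffers: $full, threads waiting for an empty buffer: $waiters"
# → full buffers: 9, threads waiting for an empty buffer: 1
```

A steadily full Full Queue suggests the server side is the bottleneck; a steadily empty one points at the archive client.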
32
The On-Bar Activity Log
Contains a log of ON-Bar activity.
BAR_ACT_LOG configuration parameter: specifies the file name and location of the log, which records status, warning, and error messages, plus the list of dbspaces/blobspaces and logical logs included in a backup or restore.
Because onbar was designed to be run in an unattended mode or by another executable, such as a storage manager, it does not produce any screen output during execution. Instead, the onbar process writes a description of its activity to a message file, including a message whenever an error or warning condition is encountered. By looking at the message file, you can determine whether a backup or restore operation succeeded, which database objects were included, and approximately how long the operation took.
Viewing recent ON-Bar activity (only users who have permission to perform backup and restore operations can use this option):
- onbar -m: prints the last 20 lines of ON-Bar's recent activity from the activity log file
- onbar -m 50: prints the last 50 lines
Message log format: timestamp | process_id | parent_process_id | message. For example, a level-0 backup writes messages such as "Begin level 0 backup rootdbs." and "Successfully connected to Storage Manager."
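A log line can be split back into those four fields with standard tools. The sample line below is invented for illustration (the timestamp, PIDs, and message are made up); only the field order comes from the format above.

```shell
# One made-up BAR_ACT_LOG line: timestamp, pid, parent pid, message.
line='2012-04-11 10:46:02 23998 23996 Begin level 0 backup rootdbs.'

ts=$(echo "$line"   | awk '{print $1, $2}')
pid=$(echo "$line"  | awk '{print $3}')
ppid=$(echo "$line" | awk '{print $4}')
msg=$(echo "$line"  | cut -d' ' -f5-)   # everything after the two PIDs
echo "[$ts] pid=$pid ppid=$ppid msg=$msg"
```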
33
ON-Bar vs. ontape
Capability (ontape / ON-Bar):
- Use a storage manager to track backups and storage media? No / Yes
- Back up selected storage spaces? No / Yes
- Back up and restore storage spaces serially? Yes / Yes
- Initialize high-availability data replication? Yes / Yes
- Restore data to a specific point in time? No / Yes
- Perform separate physical and logical restores? Yes / Yes
- Back up and restore different storage spaces in parallel? No / Yes
- Use multiple tape drives concurrently for backups and restores? No / Yes
- Restart a restore after an incomplete restore? No / Yes
- Change the logging mode for databases? Yes / No
- Back up to a cloud? Yes / No
34
35
High-Availability Data Replication (HDR)
Use: disaster recovery. Two identical servers on two identical machines.
Primary server:
- Fully functional server; all database activity (inserts, updates, deletes) is performed on this instance
- Sends logs to the secondary server
Secondary server:
- Read-only server; allows read-only queries
- Always in recovery mode
- Receives logs from the primary and replays them to stay in sync with the primary
- When the primary server goes down, the secondary server takes over as a standard server
Benefits:
- Simple to administer; little configuration required
- Just back up the primary and restore to the secondary
(Diagram: client apps connect to the primary on Blade Server A <New Orleans>, Building-A; HDR traffic flows to the secondary on Blade Server B <Memphis>, which serves read-only clients.)
36
High-Availability Data Replication (HDR): Easy Setup
Requirements:
- Same hardware (vendor and architecture)
- Logged databases
- Same storage paths on each machine
Setup:
1. Back up the primary system: ontape -s -L 0
2. Set the type of the primary: onmode -d primary <secondary_server_name>
3. Restore the backup on the secondary: ontape -p
4. Set the type of the secondary: onmode -d secondary <primary_server_name>
DONE! See the IBM Informix Dynamic Server Administrator's Guide (SC ), chapter 20. The setup is simple to operate and provides the desired protection.
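The four steps can be collected into one script. This is a sketch only: the server names are placeholders, each command must be run on the host indicated, and the wrapper echoes rather than executes so nothing fires by accident.

```shell
run() { echo "+ $*"; }   # dry run; change the body to "$@" to execute

# On the primary:
run ontape -s -L 0                    # 1. level-0 backup of the primary
run onmode -d primary secondary_srv   # 2. declare the HDR secondary

# On the secondary:
run ontape -p                         # 3. physical restore of that backup
run onmode -d secondary primary_srv   # 4. point the secondary at the primary
```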
37
Remote Standalone Server (RSS)
Replication to multiple remote secondary nodes:
- Extends HDR to include a new type of secondary (RSS)
- Can have 0 to N RSS nodes
Use:
- Capacity relief for web applications and reporting
- Ideal for disaster recovery
Supports secondary/RSS conversions (onmode):
- RSS can be used in combination with an HDR secondary
- RSS can be promoted to an HDR secondary
- An HDR secondary can be converted into an RSS
Benefits:
- Allows simultaneous local and remote replication for HA
- Supports read-write operations
- Simple online setup and use
(Diagram: Primary Node replicating to an HDR Secondary and to RSS #1 and RSS #2.)
A remote secondary server is a complete copy of the primary server, similar to HDR. The major difference from HDR is that it uses only asynchronous replication, so an RSS node on its own is not a synchronous failover target. One benefit of RSS is that it supports multiple copies of the primary machine. These copies can be used as multi-level disaster recovery in conjunction with HDR, since an RSS server can be promoted to an HDR secondary: if the primary server fails, the HDR secondary is promoted to primary and one RSS server can be promoted to HDR secondary, leaving the production system still protected from site failure. Another benefit is that RSS adds very little overhead, using full-duplex communication through the server multiplexer (SMX), which supports multiple logical connections between servers over TCP/IP. The asynchronous replication also makes it easier to tolerate network delays that would be unacceptable for HDR. Finally, an RSS server can be used to offload tasks such as reporting and data analysis, leaving more cycles on the primary for production tasks.
Similarities with an HDR secondary node:
- Receives logs from the primary
- Has its own set of disks to manage
- Primary performance does not affect RSS servers, and vice versa
- Created by performing a backup/restore of the instance
Differences from HDR:
- Can only be promoted to an HDR secondary, not to a primary
- Can only be updated asynchronously
- Only manual failover is supported
- Uses full-duplex communication (SMX) with RSS nodes
- Does not support SYNC mode, not even for checkpoints
- Requires index page logging to be turned on
38
Remote Secondary Server (RSS): Easy Setup
Requirements (similar to HDR):
- Same hardware (vendor and architecture)
- Logged databases
- Same storage paths on each machine
Configuration:
1. On the primary, enable index page logging (LOG_INDEX_BUILDS); dynamically: onmode -wf LOG_INDEX_BUILDS=1
2. Identify the RSS server on the primary: onmode -d add rss <server_name>
3. Back up the primary system: ontape -s -L 0
4. Restore the backup on the secondary: ontape -p
5. Identify the primary on the RSS server: onmode -d rss <primary_server_name>
DONE! See the IBM Informix Dynamic Server Administrator's Guide (SC ), chapter 21. The setup of an RSS server is similar to the setup of an HDR secondary.
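Sketched as a script, with the two RSS-specific steps up front. As with the HDR sketch, the server names are placeholders and the wrapper echoes instead of executing.

```shell
run() { echo "+ $*"; }   # dry run; change the body to "$@" to execute

# On the primary:
run onmode -wf LOG_INDEX_BUILDS=1   # enable index page logging (required)
run onmode -d add rss rss_srv       # register the RSS node
run ontape -s -L 0                  # level-0 backup

# On the RSS node:
run ontape -p                       # physical restore of that backup
run onmode -d rss primary_srv       # identify the primary
```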
39
onstat -g stq: Queues (Buffer Queue, Empty Queue, Full Queue)
- Buffer queue: between the client and the ontape thread
- Full Queue: between the ontape thread and the arcbackup1 thread
- Archive before-image queue: between the sqlexec thread and arcbackup2; the before-image is put into the queue during physical logging
40
Common Backup and Restore Problems
Storage manager:
- Timeout of onbar
- Error 131: object not found
- Salvaging logs and getting the wrong object
Restore:
- The system appears to be hung because the tape is done, onstat -D shows no I/O, and there is very little CPU activity. The system is clearing the physical and logical logs! Check the Informix message log for messages such as:
  Clearing the physical and logical logs has started
  Cleared 2100 MB of the physical and logical logs in 612 seconds
Storage space:
- Check that the devices are linked properly
- KAIO only uses raw I/O
- Overlapping data
Restore improvement:
- A message in the online log indicates when this phase of the restore started and completed
- Intelligent parallelism is used to clear all the logs in a single chunk with one thread: one disk-clear thread per chunk
41
Archive Validation
- For ontape backups, backup validation is done by the standalone archecker utility
- For onbar, archecker backup validation is integrated with onbar: use the onbar -v option
What is checked?
- The format of each page on the archive
- Tape control pages are sanity-checked
- Each table is checked, ensuring that all pages of the table exist on the archive
- The reserved-page format is validated
- Each chunk free list is verified
- Table extents are checked for overlap (oncheck -pe)
42
Table-Level Point-In-Time-Restore
Provides the ability to extract a set of tables, a single table, or a portion of a table from a level-0 archive (and logical logs) to a user-specified point in time. The extracted data can be placed in a file (external table) or in a table on any database server listed in the sqlhosts file (a local or remote database server).
This is not a backup-and-restore method, but the ability to restore an archived table in situations where you have accidentally dropped a table, deleted rows from a table, or truncated a table. For performance, think in terms of insert into ... select from an archived table.
- archecker -tdvs -f schema_cmd.txt (restore from an ontape backup)
- archecker -bdvs -f schema_cmd.txt (restore from an onbar backup)
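A schema command file drives the extraction. The sketch below is hypothetical: the database, table, dbspace, and timestamp are invented, and the archecker call is only echoed, but the command-file statements follow the table-level restore syntax that archecker reads via -f.

```shell
# Hypothetical schema command file for a table-level point-in-time restore.
cat > schema_cmd.txt <<'EOF'
database mydb;
create table orders (order_id integer, amount decimal(10,2)) in dbspace1;
insert into orders select * from orders;
restore to '2012-04-11 10:00:00';
EOF

run() { echo "+ $*"; }                   # dry run; change to "$@" to execute
run archecker -tdvs -f schema_cmd.txt    # restore from an ontape backup
```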
43
Tuning Informix Utilities Performance Tuning Bootcamp
Information Management Partner Technologies
44
Presenter: Kishore Chinda