
Figure 22-71 Schedule log file in RADON shows the start of the scheduled backup
5. While the client continues sending files to the server, we force SALVADOR to
fail. The following sequence occurs:
a. On the client, the connection is lost, as we can see in Figure 22-72.
Figure 22-72 RADON loses its connection with the TSMSRV06 server
b. In the Veritas Cluster Manager console, SALVADOR goes down and
OTTAWA takes over the resources.
c. When the Tivoli Storage Manager server instance resource is online (now
hosted by OTTAWA), the schedule restarts, as shown in the activity log in
Figure 22-73.

Figure 22-73 In the event log the scheduled backup is restarted
6. The backup ends, as we can see in the schedule log file of RADON in
Figure 22-74.
Figure 22-74 Schedule log file in RADON shows the end of the scheduled backup
In Figure 22-74 the schedule log file displays the event as failed with
return code = 12. However, if we look at the file in detail, we see that each
volume was backed up successfully, as shown in Figure 22-75.

Figure 22-75 Every volume was successfully backed up by RADON
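Instead of the GUI consoles, we can make the same checks from the command line. What follows is only a sketch: the service group name (SG-TSM), the policy domain (STANDARD), the schedule name (DAILY_INCR), and the administrator credentials are placeholders that must be replaced with the names used in your environment.
# VCS: confirm which node now has the Tivoli Storage Manager service group online
hagrp -state SG-TSM
# Tivoli Storage Manager: check the result of the scheduled event,
# including its status and return code
dsmadmc -id=admin -password=xxxxx "query event STANDARD DAILY_INCR format=detailed begindate=today"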
Results summary
The test results show that after a failure on the node that hosts the Tivoli Storage
Manager server instance, a scheduled backup started from one client is restarted
after the failover on the other node of the VCS.
In the event log, the schedule can be displayed as failed instead of completed,
with return code = 12, if too much time elapses after the first node loses the
connection. In any case, the incremental backup of each drive ends successfully.
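A further cross-check can be made on the server side: the last backup completion date of each file space belonging to the client node confirms that every drive was in fact backed up. The following is only a sketch, using the node name RADON from our example; the administrator credentials are placeholders.
# Show each file space of node RADON, including its last backup completion date/time
dsmadmc -id=admin -password=xxxxx "query filespace RADON format=detailed"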
Attention: The scheduled event can end as failed with return code = 12 or as
completed with return code = 8, depending on how long it takes the second
node of the cluster to bring the resource online. In both cases, however, the
backup completes successfully for each drive, as we can see in Figure 22-75.
Note: In the test we have just described, we used a disk storage pool as the
destination storage pool. We also tested using a tape storage pool as the
destination and got the same results. The only difference is that when the
Tivoli Storage Manager server is up again, the tape volume it was using on the
first node is unloaded from the drive and loaded again into the second drive,
and the client receives a “media wait” message while this takes place. After
the tape volume is mounted, the backup continues and ends successfully.
22.10.3 Testing migration from disk storage pool to tape storage pool
Our third test is a server process: migration from a disk storage pool to a tape
storage pool.

Objective
The objective of this test is to show what happens when a disk storage pool
migration process is started on the Tivoli Storage Manager server and the node
that hosts the server instance fails.
Activities
For this test, we perform these tasks:
1. We open the Veritas Cluster Manager console to check which node hosts the
Tivoli Storage Manager Service Group: OTTAWA.
2. We update the disk storage pool (SPD_BCK) high migration threshold to 0.
This forces migration of the backup versions to its next storage pool, a tape
storage pool (SPT_BCK). (The equivalent administrative commands are
sketched after this list.)
3. A process starts for the migration task and Tivoli Storage Manager asks the
tape library to mount a tape volume. After a few seconds the volume is
mounted, as we show in Figure 22-76.
Figure 22-76 Migration task started as process 2 in the TSMSRV06 server
4. While migration is running, we force a failure on OTTAWA. At this time the
process has already migrated thousands of files, as we can see in
Figure 22-77.
Figure 22-77 Migration has already transferred 4124 files to the tape storage pool
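For reference, these tasks can also be carried out from the command line instead of the consoles. The following is only a sketch: the administrator credentials are placeholders, the storage pool name is the SPD_BCK pool of our example, and setting the low migration threshold to 0 as well is our assumption, made so that the pool drains completely.
# VCS: check which node currently hosts the Tivoli Storage Manager service group
hastatus -sum
# Tivoli Storage Manager: force migration by lowering the high (and low) migration thresholds
dsmadmc -id=admin -password=xxxxx "update stgpool SPD_BCK highmig=0 lowmig=0"
# Monitor the migration process and the tape volume mounted for it
dsmadmc -id=admin -password=xxxxx "query process"
dsmadmc -id=admin -password=xxxxx "query mount"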