My client is planning to switch the vendor who manages the datacenter where all Documentum servers are hosted. So effectively we have to rebuild all the systems, as there is no plan to move the actual hardware from datacenter 'A' to datacenter 'B'. Following are the current Production environment details:
- Documentum Content Server (5.3 SP4) running on an AIX host. Both the docbase and the docbroker run on the same host.
- Documentum Webtop & DA (5.3 SP4) running on IBM WAS 6, again on an AIX host.
- Oracle 10g Release 2 (10.2.0.2) running on an AIX host.
All three of the above systems run on independent hosts.
The current application is used primarily to drive the customer's business process automation and content management. So at the time of the move, the production system will have more than 500 running workflow (dm_workflow) instances at various stages. The requirement is to rebuild the entire application in the new datacenter (Datacenter B) and load the dump (content and metadata) into the newly built system, with all running workflows and lifecycles in the same state as they were in datacenter A.
We have proposed a docbase-cloning type of solution and plan to do a Proof of Concept of the approach described below using our Staging data. Please help me validate the process at a high level and suggest if there is a better way to do it.
Backup from Current Production
- Shut down the current production docbase and database instances.
- Take an export dump of the production docbase schema (database).
- Back up the Content Server data store from the $DOCUMENTUM/data/<docbase_name> directory on the Content Server AIX host. We plan to create a tarball of the entire directory structure using the UNIX tar command-line utility. The expected size of the data store is around 90-100 GB.
- Back up all Content Server customizations present.
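For the PoC, the backup steps above could be sketched roughly as follows. This is only an illustration: the docbase name, backup path, and the commented-out shutdown/export invocations are assumptions and would need to be adapted to the actual environment.

```shell
#!/bin/sh
# Rough sketch of the backup phase. DOCBASE and BACKUP_DIR below are
# placeholders, not values from our environment.
DOCBASE=mydocbase
BACKUP_DIR=/backup

# Steps 1-2: stop the docbase and take the schema export. Shown commented
# out here -- script and utility names vary by install, verify first:
# $DOCUMENTUM/dba/dm_shutdown_$DOCBASE
# exp userid=${DOCBASE}/<password> owner=$DOCBASE \
#     file=$BACKUP_DIR/${DOCBASE}_schema.dmp log=$BACKUP_DIR/exp.log

# Step 3: tar the content store. -C keeps archive paths relative to the
# data directory, so the tarball restores cleanly even if $DOCUMENTUM
# lives at a different path on the new host.
backup_datastore() {
  _root=$1; _docbase=$2; _archive=$3
  tar cf "$_archive" -C "$_root/data" "$_docbase" || return 1
  # A checksum lets us verify the 90-100 GB transfer at datacenter B.
  cksum "$_archive" > "$_archive.cksum"
}
```

For the real run we would call `backup_datastore "$DOCUMENTUM" "$DOCBASE" "$BACKUP_DIR/${DOCBASE}_data.tar"` only after the docbase shutdown completes, so no content files change mid-archive.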
Restoration of the above backup in the new target
- Create a new docbase on the newly proposed production server host.
- Shut down the newly created docbase.
- Drop the database schema of the newly created docbase.
- Restore the schema export taken from the existing production database using the database import utility.
- Restore the backed-up data store tarball to the new data store location.
- Update backend tables such as dm_mount_point_s, dm_server_config_s, dm_job_s, and dm_location_s with the new hostname and directory paths.
- Encrypt the new database password using the dm_encrypt_password utility and replace it in dbpasswd.txt.
- Start the new docbase.
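The "update backend tables" step could be scripted so the substitutions are reviewable and repeatable. A minimal sketch follows, assuming hypothetical old/new hostnames and paths; the column names shown (r_host_name, host_name, file_system_path, target_server) are from memory of the 5.3 object model and must be verified against the actual schema before running anything.

```shell
#!/bin/sh
# Sketch: generate the host/path fix-up SQL once, review it, then replay it.
# All hostnames and paths below are placeholders.
OLD_HOST=oldhost.dca.example.com
NEW_HOST=newhost.dcb.example.com
OLD_PATH=/old/documentum
NEW_PATH=/new/documentum

cat > update_locations.sql <<EOF
-- Run as the repository schema owner, after the import and before the
-- first start of the new docbase. Verify column names against the
-- object reference for your version before executing.
UPDATE dm_server_config_s SET r_host_name = '$NEW_HOST'
 WHERE r_host_name = '$OLD_HOST';
UPDATE dm_mount_point_s SET host_name = '$NEW_HOST'
 WHERE host_name = '$OLD_HOST';
UPDATE dm_location_s
   SET file_system_path = REPLACE(file_system_path, '$OLD_PATH', '$NEW_PATH');
UPDATE dm_job_s
   SET target_server = REPLACE(target_server, '$OLD_HOST', '$NEW_HOST');
COMMIT;
EOF
# Then apply it, e.g.: sqlplus <schema_owner>/<password> @update_locations.sql
```

Keeping the SQL in a generated file also gives us something concrete to diff between the PoC run on Staging and the eventual production cutover.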
Also, as the content store is large (around 100 GB), there is a plan to chunk it and move it to datacenter B incrementally. In this scenario, what is the best option for shipping the data store on a weekly basis?
We are planning to use a combination of the find, xargs, and tar utilities in the following manner (with -type f added so that directories are not passed to tar, which would otherwise re-archive their entire contents and duplicate unchanged files):
find $DOCUMENTUM/data/<docbase_name> -type f -mtime -7 | xargs tar rvf docbaseArchive.tar
Please suggest if there is a common practice for doing incremental backups of a filesystem.
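One alternative we are considering to a fixed -mtime -7 window is a marker-file incremental, which avoids missing or double-shipping files if the weekly job slips by a day. A minimal sketch, with placeholder names, and assuming paths contain no whitespace (which holds for a typical Documentum content store layout):

```shell
#!/bin/sh
# incremental_backup <data_dir> <archive> <marker>
# Archives only files modified since the marker file's timestamp; on the
# first run (no marker yet) it takes everything.
incremental_backup() {
  _dir=$1; _archive=$2; _marker=$3
  if [ -f "$_marker" ]; then
    _cond="-newer $_marker"    # only files changed since the last run
  else
    _cond=""                   # first run: full sweep
  fi
  # -type f skips directories, so tar does not recursively re-archive
  # unchanged content; $_cond is intentionally unquoted so it word-splits.
  find "$_dir" -type f $_cond -print | xargs tar rf "$_archive"
  # Advance the marker only after a successful archive pass.
  touch "$_marker"
}
```

Each weekly run would write its own archive (week1.tar, week2.tar, ...) against the same marker file, and the chunks would then be shipped and unpacked in order at datacenter B.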