Has anyone tried to create a 2nd Content Server for a single repository using the CFS utility?
Yes, it works for us with Content Server 6.0 SP1. The second server runs on a different machine; the "data" folder is mounted at the same path, but the binaries are not shared.
Yes - it worked fine for me as well. CS 6.0 SP1 on RHEL. The content storage area (/documentum/data) was served up via NFS/SAN mount. The CFS installer copies across everything that you'll need: server.ini, aek.key, startup/shutdown script, creates a new server config object, etc.
We noticed some differences between the original server_config and the 2nd one created from CFS as listed below, wonder if you experienced the same:
- owner_name: schema owner vs. installation owner
- projection_targets
- projection_ports
- projection_proxval
- projection_notes
- projection_enable
- wf_agent_worker_threads
- r_creator_name: schema owner vs. installation owner
However, the host_name on both points to the original host server.
Other than those:
- the startup and shutdown scripts for the 2nd server have a _<service> suffix at the end, which I believe came from the installation step where the docbase service name was provided.
- The name of the docbase log is different on the 2nd server as well.
Thanks.
Forgive my ignorance, but what is CFS?
Thanks,
-Mark
CFS means Content-File Server. See Content_Server_6_installation.pdf, page 104.
Chenje: Yes, there are such minor differences. For example, you have to set up the server.ini files correctly so that each content server projects to its local docbroker first and then to the other docbroker (proximity values are lowest for the "local" docbroker and higher for "distant" ones). Besides that, there was a bug where the generated startup script (dm_start_<servicename>) was missing the NLS_LANG environment variable, so our special characters were handled badly. Watch out for this!
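If you hit the missing NLS_LANG problem, a simple workaround is to add the variable near the top of the generated dm_start_<servicename> script yourself. A sketch (the locale value below is just an example; use whatever your Oracle client/character set actually requires):

```shell
# Workaround sketch for the missing NLS_LANG in the generated
# dm_start_<servicename> script. The locale value is an example --
# substitute the one your environment needs.
NLS_LANG=AMERICAN_AMERICA.UTF8
export NLS_LANG
echo "NLS_LANG=$NLS_LANG"
```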
Mike
Thanks for the info.
As far as projection, right now, the original is projecting to the 2nd server with a proxval value of 2. But the 2nd server is not projecting to anything. Are you saying that they both need to have 2 projection_targets with the proxval reversed on each side?
Also, do you know the bug# of the _service for the startup and shutdown scripts?
Thank you.
I projected both repository processes to both docbrokers - this required manual editing. You should also update the dfc.properties for any client app (e.g. Webtop) to reference both docbrokers. If you're just load-balancing the repository then the proximity values aren't important - I think that you can set them to be the same value or just remove them.
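For reference, listing both docbrokers in a 6.x dfc.properties looks roughly like this (hostnames and ports below are made-up examples; the client tries them in index order):

```
dfc.docbroker.host[0]=cs1.example.com
dfc.docbroker.port[0]=1489
dfc.docbroker.host[1]=cs2.example.com
dfc.docbroker.port[1]=1489
```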
Hi!
Yes, exactly, in the case that you also need load-balancing and failover configuration.
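In case a sketch helps: the reciprocal projection can also be set with DQL on each dm_server_config object. Everything below (hostnames, repository name, ports) is hypothetical; on the second server's config object you would repeat this with the two proxval values swapped:

```sql
-- Hypothetical example: on the FIRST server's config object,
-- the local docbroker gets the lower proximity value.
UPDATE dm_server_config OBJECT
SET projection_targets[0] = 'cs1.example.com',
SET projection_ports[0]   = 1489,
SET projection_proxval[0] = 1,
SET projection_enable[0]  = TRUE,
SET projection_targets[1] = 'cs2.example.com',
SET projection_ports[1]   = 1489,
SET projection_proxval[1] = 2,
SET projection_enable[1]  = TRUE
WHERE object_name = 'myrepo'
-- Repeat on the second server's config with proxval reversed,
-- then reinit/restart the servers so the change takes effect.
```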
The issue is #168336.
We are setting this up mainly for both failover and load balancing. If you project to both servers, will users see two docbase entries at the login page? Also, this docbase is the governing docbase of a federation; are the member docbases going to see two governing docbases?
Regarding bug #168336, is it possible to manually change the name of the startup/shutdown scripts?
No, there will be only one repository entry. Behind the scenes the client will connect to one of the content servers, depending on which one the docbroker hands out.
You can change the name of the startup scripts, it doesn't matter. But why would you do so?
I see. I'd like to keep the name of the scripts consistent on both sides to avoid confusion.
Also, I've heard that the agent_exec needs to be shut down on the 2nd node, because it causes issues with the server jobs. Did you experience that?
AFAIK agent_exec_method cannot be shut down, as it is restarted by the Documentum server process if it finds that it has died.
We have not experienced problems with that, but if you are concerned about which job will run on which node, you can set it in Documentum Administrator on the job's Info page.
The CFS installer should force all jobs to run on the primary content server (by updating target_server attribute on dm_job) thereby avoiding this issue. In a failover situation you would need to update this to reference the other content server.
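In a failover, retargeting the jobs can be scripted with a DQL update along these lines. The target_server values below are purely illustrative (the format is repository.server_config_name@host); check your actual dm_server_config object names first:

```sql
-- Illustrative values: repository 'myrepo', server config
-- 'myrepo' on the primary and 'myrepo_cs2' on the second node.
UPDATE dm_job OBJECTS
SET target_server = 'myrepo.myrepo_cs2@cs2.example.com'
WHERE target_server = 'myrepo.myrepo@cs1.example.com'
```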
The 2nd server node duplicated 5 jobs, with a funny docbase-name suffix attached to the end:
- dm_ContentWarning
- dm_LogPurge
- dm_ContentReplication
- dm_DMClean
- dm_DMFilescan
The target_server was set to the 2nd node. I wonder if I should reset the target_server to the original node, or just simply delete these jobs?
If you think about it, these jobs do make sense. dm_LogPurge removes the old log files, which are stored separately per server. The others make sense in configurations where you have a different file store for each content server (you could do that, depending on your needs).
I've done it a few times. Before I found this thread, I started a discussion on High Availability in general for AIX in response to another question. The discussion, including the directions from the Support Note, can be found in the thread Documentum High Availability on AIX.
-Pie
Hi ukdavo,
I am trying to do this in an HP-UX environment, in which case /var/content is NFS-mounted on server 1. My question is: do I have to install Content Server on the second server first, or can I directly run the CFS installer and proceed with the rest of the steps?
Please advise.
Thanks
Rajesh Mani