Documentum migration from one platform to another

Just wondering if anyone has done any migration from one platform to another recently.

Source:
Content Server 7.3 running on AIX/DB2

Target:
Content Server 16.4 running on Windows/SQL Server

I was wondering if we could use the "docbase cloning" approach. We are trying to see if we can do this migration without a third-party migration tool. Any help would be greatly appreciated!

Comments

  • Yes, you can, but it will take some reverse engineering to figure out the differences between an AIX/DB2 installation and a Windows/SQL Server one (e.g. dm_location, dm_plugin, and dm_method objects). After cloning, you will need to patch all of these before attempting to start the Content Server. Start by creating a new docbase on Windows/SQL Server and comparing it with your AIX/DB2 docbase.
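    One way to spot those differences is to query the platform-specific object types in both the freshly created Windows/SQL Server docbase and the clone, then diff the output. A rough DQL sketch (attribute lists trimmed to the ones that usually matter):

    -- Run in both repositories and compare; these types hold
    -- OS- and path-specific settings.
    SELECT object_name, path_type, file_system_path FROM dm_location;
    SELECT object_name, method_verb, method_type, use_method_content FROM dm_method;
    SELECT object_name FROM dm_plugin;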

  • The biggest concern I would have is migrating the data from DB2 to SQL Server, since each database has its own schema and potentially different implementations of the views and stored procedures that the Content Server uses. I haven't done this, so I can't say for sure whether it is really problematic or not. Migrating from one OS to another is less complex, since OpenText builds separate installation binaries for each OS.

  • Interesting - we want to move from Oracle to SQL Server as well, for Azure compatibility.
    Any news on the subject?

  • Just wondering if anyone is doing a cross-platform migration/upgrade.

  • edited August 6, 2019

    I have used this approach recently to change platforms from Windows/Oracle 6.7 to Windows/SQL Server 7.3. This is more of a data migration than a clone, but it worked very well.
    First, create a new database, perform a brand-new install of Content Server, and run the repository configuration program to establish the new repository.
    I was able to create DARs in Composer from the source system to re-deploy configurations (e.g. object types, custom jobs/methods, WebTop presets) into the new repository using DAR Deployer. You have to rebuild any custom registered tables, but that's pretty easy with SQL.
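    Recreating a registered table in the target is mostly one DQL statement on top of the table itself. A minimal sketch, with illustrative table and column names:

    -- Create the underlying table in SQL Server first, then register it
    -- so it becomes queryable through DQL:
    REGISTER TABLE dm_dbo.legacy_lookup (code CHAR(10), description CHAR(64));
    SELECT code, description FROM dm_dbo.legacy_lookup;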
    Now you're "cloned" and ready for data migration:
    Use the dump utility on the source system, but do NOT use the full-docbase option. For example, I used a DQL predicate to dump cabinet by cabinet where our data was stored.
    Use the Load utility in the target system to load all the data objects (documents, folders, etc.). Use the relocate = T option; I also used generate_event = F in my case.
    The dump and load utilities are still documented in the admin guide, and they still work (at least up to 7.3). Just create the API dump script, copy over the output dump file, and create the API load script.

    Dump/load does all versions. There is one thing to consider: it also brings over all users, groups, and ACLs related to the objects. For my purpose, that was actually desired, but there are cases where it might not be.

    I've also used this approach for divestiture-related data migrations where the target company also deployed Documentum; in that case it was not desired, but it wasn't too much effort to go in and clean out all the obsolete ACLs, groups, and users after all the loads were complete (in that order, using the UserRename and GroupRename jobs for users and groups).

    Also, it does not do any differencing or uniqueness checking. If you run load twice on the same dump file, it will bring in duplicate documents (but not groups, users, or ACLs), so be careful with that.
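    For the cleanup step, a hedged sketch of finding orphaned ACLs in DQL before deleting anything (review the results first; the (ALL) hint includes non-current versions):

    -- ACLs no longer referenced by any object version:
    SELECT object_name, owner_name FROM dm_acl
    WHERE object_name NOT IN (SELECT acl_name FROM dm_sysobject (ALL));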

    Sample for getting our templates migrated:

    create,c,dm_dump_record
    set,c,l,file_name
    G:\documentum\dump\07-US-TEMPLATES.dmp
    set,c,l,include_content
    T
    append,c,l,dump_parameter
    compress_content=T
    append,c,l,dump_parameter
    restartable=T
    append,c,l,dump_parameter
    cache_size=10
    append,c,l,type
    dm_sysobject
    append,c,l,predicate
    folder('/Templates',descend)
    save,c,l
    getmessage,c
    dump,c,l

    create,c,dm_load_record
    set,c,l,file_name
    H:\documentum\load\07-US-TEMPLATES.dmp
    set,c,l,relocate
    T
    append,c,l,load_parameter
    generate_event=F
    save,c,l
    getmessage,c
    dump,c,l
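    Both scripts are plain text API scripts; assuming iapi is on the path, one way to run them is to pass them with -R along with the usual connection parameters (repository names, account, and file names here are illustrative):

    iapi SOURCE_REPO -Udmadmin -Ppassword -Rdump_templates.api
    iapi TARGET_REPO -Udmadmin -Ppassword -Rload_templates.api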

  • Steven, how big was your dump file? In the past, we used dump/load as well, but we ran into issues when the dump file was over 1 GB.

  • edited August 6, 2019

    Definitely use the compress option in the dump. I don't recall exactly (and the files are now gone), but I'm pretty sure they were larger than 1 GB. I used this approach for approx. 1 TB total, but it was done in multiple batches (by plant site, around 20, each of which had its own folder in the repository). I just checked my tracking info, and the largest original uncompressed batch was 152 GB of content. Even compressed, that still had to be many GB.

  • edited August 6, 2019

    Good to know that they resolved the size issue. I would be curious whether dump/load is still officially supported with 16.4/16.5.

  • Steven,

    Thanks for your input. I'm glad to hear that the dump and load utility worked in your case. In our case, there are two unique challenges: we would like to retain the object IDs in the target, and we have hundreds of terabytes of content. So I am not sure if I can follow this approach.

  • @bthomas said:
    Steven,

    Thanks for your input. I'm glad to hear that the dump and load utility worked in your case. In our case, there are two unique challenges: we would like to retain the object IDs in the target, and we have hundreds of terabytes of content. So I am not sure if I can follow this approach.

    Any luck with this? We have the same requirement: we would like to retain the object IDs since we have many active running workflows.

  • You can't preserve the object IDs with dump and load. Therefore, you have to go with the cloning approach.

  • You might want to look into the relocate load parameter. I've always set it to T, but I wonder what F does? Documentation is hard to find.
    set,c,l,relocate
    T

    But I don't see how you could load object IDs from one repository into another, non-cloned one. Part of the object ID is the docbase ID.
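    The ID format makes this concrete: an r_object_id is 16 hex digits, with a two-digit type tag followed by a six-digit docbase ID and an eight-digit counter, so an ID minted in one repository is not valid in another with a different docbase ID. An illustrative (made-up) ID:

    09 00012d 46f2a1b4
    ^^ ^^^^^^ ^^^^^^^^
    |  |      object counter, unique within the repository
    |  docbase ID in hex (0x00012d = docbase 301)
    type tag (09 = dm_document)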

  • Oh, I think it only works if loading objects back into the same repository. From the admin guide:

    Refreshing repository objects from a dump file

    Generally, when you load objects into a repository, the operation does not overwrite any existing objects in the repository. However, in two situations overwriting an existing object is the desired behavior:
    • When replicating content between distributed storage areas
    • When restoring archived content

    In both situations, the content object that you are loading into the repository could already exist. To accommodate these instances, the load record object has a relocate property. The relocate property is a Boolean property that controls whether the load operation assigns new object IDs to the objects it is loading.

    The type and predicate properties are for internal use and cannot be used to load documents of a certain type.

  • No further luck... we changed our approach and decided not to keep the object IDs the same. It was a hard choice to make. The only option that will meet this requirement is EMA.

  • We did something similar (a long time ago): we copied the r_object_id to a custom attribute in the source, did a dump and load (set to F), and then modified our app to query this custom attribute when the object ID did not exist in the target.
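    A sketch of that stamping step. DQL UPDATE only takes literal values, so one shortcut is to set the attribute directly in the source database before the dump. Names are illustrative (single-valued attributes of a custom type live in its _s table), and a direct database update like this should only be run against a source that is about to be retired:

    -- Source database, before running the dump:
    UPDATE my_doc_type_s
    SET legacy_object_id = r_object_id;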

  • @DCTM_Guru said:
    We did something similar (a long time ago): we copied the r_object_id to a custom attribute in the source, did a dump and load (set to F), and then modified our app to query this custom attribute when the object ID did not exist in the target.

    That doesn't work in our scenario, as we need to migrate runtime workflows and there are a huge number of them.

  • How many rows and columns are we talking about here? If it's a "manageable" number, you should be able to do an UPDATE. That's what we are planning to do.

  • @UGSP - yes, my approach wouldn't work for WFs. I was purely talking about document IDs being different. Good luck.
