Hi,
We're in the process of replacing our existing S3 object storage with a different one. We have a few billion objects in the current S3 bucket, and we use an Oracle database with Documentum. All our objects are WORM-protected and are never modified.
Due to the large number of objects, I was thinking about the following procedure:
1. Create bucket A in the new S3 storage and configure it to be used by Documentum for all future writes.
2. The old S3 bucket B becomes read-only; no new writes go to the old S3 storage.
3. Create bucket C in the new S3 storage and copy all data from the old bucket B.
4. Update Documentum/Oracle with the new location information for the migrated objects (bucket B -> bucket C). I'm not sure how to do this; I would assume it could be done via a script or something.
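For step 4, here is a rough sketch of the kind of script I had in mind. All of the names in it (CONTENT_TABLE, STORAGE_LOCATION, OBJECT_ID) are hypothetical placeholders, not the real Documentum schema, and in practice the metadata update would presumably have to go through supported Documentum tooling or DQL rather than raw SQL:

```python
# Hypothetical sketch only: generate batched SQL that repoints migrated
# objects from the old bucket to the new one. CONTENT_TABLE,
# STORAGE_LOCATION and OBJECT_ID are made-up names; the actual
# Documentum tables/attributes differ, and direct SQL against the
# repository database is generally unsupported.

def build_update_batches(object_ids, old_bucket, new_bucket, batch_size=1000):
    """Yield one UPDATE statement per batch of object IDs, rewriting the
    bucket portion of each object's stored location string."""
    for i in range(0, len(object_ids), batch_size):
        batch = object_ids[i:i + batch_size]
        ids = ", ".join(f"'{oid}'" for oid in batch)
        yield (
            f"UPDATE CONTENT_TABLE "
            f"SET STORAGE_LOCATION = REPLACE(STORAGE_LOCATION, "
            f"'{old_bucket}', '{new_bucket}') "
            f"WHERE OBJECT_ID IN ({ids});"
        )

# Example: three objects, batch size 2 -> two UPDATE statements
for stmt in build_update_batches(["obj1", "obj2", "obj3"],
                                 "bucket-b", "bucket-c", batch_size=2):
    print(stmt)
```

Batching the IDs keeps individual transactions small, which matters with a few billion rows; but again, this is just to illustrate the idea, not a tested procedure.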
Is something like this doable? We were told that migrate_content is not really suitable for a huge number of objects, and we need to minimize the potential downtime.
As you probably figured out already, I'm a storage guy and I don't know much about Documentum yet.