Migration of files to Archive Center
Hello All,
We need to migrate 10 TB of data from an on-premises file share to OpenText ECM. In our environment we have only OTCS (installed on an Azure tenant), and all of our records management retentions are built on it. We are now planning to add Archive Center (OTAC) for this migration so that we avoid loading this volume of data into OTCS. My questions are:
- Can we use OTCS as a front-end system to declare all these files and use OTAC only for archiving the 10 TB of files?
- Can we use OTAC on its own as a separate solution for migrating these files? If yes, how do we configure retention there? Is there a front-end view for OTAC that lets end users view or search files, just as OTCS does?
Could you please help me understand what the ideal solution would be for bringing this much data into the system?
Comments
- OT Records Management will only work if the file is known to OTCS, i.e., it has a DTree.DataID that links through DVersData.DocID to ProviderData.ProviderID (see the query sketch just after this list).
- You can create a file using a leading application and use only OTAC; in that case you are using OTAC as the file provider (think of SAP ArchiveLink ADK scenarios, which have no connection to OTCS).
- When you use RM in OTCS with OTAC as the file provider, the file's retention is based on OTCS RM rules, and OTCS passes the retention rule to the archive server using its APIs. For example, if you have a file in Livelink that has to be destroyed on a date, say Dec 25, 2024, CS's RM OScript code issues the purge and delete. The action is recorded in a table so that CS can show the disposition, and by default it retries three times in case the file is locked by something else, after which you have to take manual action (sketched at the end of this comment). The same thing happens if the file is stored in OTAC; in that case the delete command is issued to the archive server API.
- Whether you use OTAC, EFS, or internal storage as the file provider, from a Livelink point of view OScript commands are doing the work. You cannot do Livelink RM without Livelink knowing about the file.
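To make that linkage concrete, here is a minimal read-only sketch tracing a node to its storage provider, following the tables named above. The exact column names, the DSN, and the example DataID are assumptions; verify them against your own OTCS schema version before relying on this.

```python
import pyodbc  # assumes an ODBC DSN pointing at the OTCS database

# Join path from the list above: DTree.DataID -> DVersData.DocID,
# then DVersData.ProviderID -> ProviderData.ProviderID.
SQL = """
SELECT d.DataID, d.Name, v.Version, p.ProviderID
FROM DTree d
JOIN DVersData v ON v.DocID = d.DataID
JOIN ProviderData p ON p.ProviderID = v.ProviderID
WHERE d.DataID = ?
"""

conn = pyodbc.connect("DSN=otcs;UID=reader;PWD=secret")  # hypothetical read-only login
for row in conn.cursor().execute(SQL, 12345):            # 12345 = an example DataID
    print(row.DataID, row.Name, row.Version, row.ProviderID)
```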
Apart from the technical aspects, I would do some planning on the 10 TB and see how much of it can be purged before bringing it into CS.
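And here is a minimal sketch of the disposition flow described in the list above: issue the destroy, retry up to three times if the content is held, then flag for manual action. archive_delete() is a hypothetical stand-in for the call to the archive server; in reality CS's RM OScript code drives all of this internally.

```python
import time

MAX_ATTEMPTS = 3  # CS retries the destroy three times by default

def archive_delete(doc_id: int) -> bool:
    # Hypothetical stand-in for the delete command CS issues to the
    # archive server (OTAC) API; returns True once the content is gone.
    return True

def dispose(doc_id: int) -> str:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if archive_delete(doc_id):
            return "destroyed"        # CS records the disposition in a table
        time.sleep(60)                # e.g. the file was locked; wait and retry
    return "manual action required"   # after three failed attempts

print(dispose(12345))
```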
Thank you, sir, for all this valuable information.
I need one more piece of info related to OTAC.
We have OTCS (22.3) installed in our environment, with a separate SQL Server for OTCS. If we now install and configure an OTAC environment and use the current OTCS as the front-end system, do we need to use the same SQL Server or a separate one for OTAC?
As mentioned, we plan to use the Archive Center (OTAC) to migrate this volume to prevent overloading OTCS with data.
OTCS (binaries running on Windows/Unix/cloud) + application server (Tomcat)/IIS + any DB (Oracle, SQL Server, PostgreSQL, HANA)
OTAC (binaries running on Windows/Unix/cloud) + application server (Tomcat) + any DB (Oracle, SQL Server, PostgreSQL)
OTDS (binaries running on Windows/Unix/cloud) + application server (Tomcat) + any DB (Oracle, SQL Server, PostgreSQL)
OTCS's database requirements are very high, and its collation requirements may differ from OTAC's, so with the right licensing you may be able to cram all three onto the same SQL Server engine, but that is not how most customers run. The OTAC database is tiny compared to the OTCS one.
In OTCS circles we refer to OTCS as the brain and OTAC as a file system on steroids (the brawn).
OTCS communicates with OTAC using ArchiveLink, and in theory OTAC's database can be a different one, since OTCS never talks to OTAC's DB directly.
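To give a feel for that: ArchiveLink is essentially an HTTP command interface against the archive, so OTCS only ever issues requests like the one below and never touches OTAC's database. The host, archive name (contRep), and document ID are made up, and the signed-URL security parameters (e.g. secKey) are omitted for brevity.

```python
import requests

BASE = "http://otac.example.com:8080/archive"  # hypothetical OTAC endpoint

# Fetch a document by archive (contRep) and document ID; a real call
# would also carry the security parameters of a signed URL.
url = f"{BASE}?get&pVersion=0046&contRep=A1&docId=0A1B2C3D4E5F"
resp = requests.get(url)
print(resp.status_code, len(resp.content))
```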
"As mentioned, we plan to use the Archive Center (OTAC) to migrate this volume to prevent overloading OTCS with data."
OTCS is not going to be bothered by 10 TB of data. The real question you need to ask is whether this 10 TB needs to be RM-managed. AFAIK, RM for files is only possible if OTCS is involved, which means the OTAC files will be represented inside OTCS/Livelink tables (pointers only, not the actual content). Like it or not, those 10 TB of files will cause many DataIDs to be created and Livelink will index them, so there will be some load, and you will need to plan to expand the OTCS database accordingly if that was never intended in the first place (see the back-of-envelope below).
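A rough back-of-envelope, purely illustrative (the average file size and the per-node metadata overhead are assumptions, not measurements):

```python
TOTAL_BYTES = 10 * 1024**4     # 10 TB of content to migrate
AVG_FILE_BYTES = 1024**2       # assume ~1 MB average file size
ROW_BYTES = 4 * 1024           # assume ~4 KB of metadata + index per node

files = TOTAL_BYTES // AVG_FILE_BYTES
db_growth_gb = files * ROW_BYTES / 1024**3
print(f"~{files:,} DataIDs")                        # ~10,485,760 nodes to index
print(f"~{db_growth_gb:.0f} GB of OTCS DB growth")  # ~40 GB, content excluded
```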
You can also ask OT whether there are other products that allow you to run RM on OTAC without OTCS involvement, but what I described above is the only approach I have seen.
To clarify, the size is 30 TB, not 10. If we choose OTAC, it will be hosted in Azure, and we will have to pay for the storage. Similarly, if we migrate to OTCS, we will need to expand the Azure cloud storage as well. Do you see any benefits of using OTAC in this scenario?
I don't want to sound like OT Sales/Marketing, but OTAC has a host of features that a plain file system cannot match. Even though OTAC is built from a file system, a database, and some logic, it is far superior to a regular file system. It looks like you have not yet researched either RM or OTAC, so do some research yourself before embarking on this. Most likely you will feel the need for OTAC once you start your 30 TB file-system migration, so you may be able to learn from mistakes as well.
I suggest you look for the OTAC capabilities white paper from OT (some clues: never runs out of space, encryption, WORM, security keys).