Best Of
Re: Download Documents using REST API and webreport
WebReport approach
Let's imagine you are searching for documents within a folder and want to enable the option to download them. First, you need to prepare the LiveReport, which will serve as the source for the WebReport. If, for example, you're searching within a folder with DataID=133428, you should use the following query:
SELECT name, dataid FROM dtree WHERE subtype=144 AND parentid=133428
Once you have the source ready, you can proceed to set up the WebReport. Use the "LLURL:DOWNLOAD" subtag to generate the code necessary for the download function. Below is the code for the WebReport:
[LL_WEBREPORT_STARTROW /]
<table>
<tr>
<td><a href="[LL_REPTAG_2 LLURL:DOWNLOAD /]">[LL_REPTAG_1 /]</a></td>
</tr>
</table>
[LL_WEBREPORT_ENDROW /]
The result would be a clickable list of documents.
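For reference, the LLURL:DOWNLOAD subtag expands each row's DataID into a classic Content Server download URL. A minimal sketch of that expansion in JavaScript (the CGI base path `/otcs/cs.exe` is an assumption; it varies per installation):

```javascript
// Sketch: build the classic Content Server download URL that the
// LLURL:DOWNLOAD subtag expands to for a given DataID.
// The CGI base path is an assumption -- adjust it to your install.
function buildDownloadUrl(cgiBase, dataId) {
  const params = new URLSearchParams({
    func: 'll',
    objId: String(dataId),
    objAction: 'download'
  });
  return `${cgiBase}?${params.toString()}`;
}

// Example with a hypothetical DataID returned by the LiveReport
console.log(buildDownloadUrl('/otcs/cs.exe', 133430));
// /otcs/cs.exe?func=ll&objId=133430&objAction=download
```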
ModuleSuite approach
If you have the ModuleSuite installed, you can create your own API to facilitate document downloads. The typical method involves creating a ContentScript in the "Content Script Volume:CSServices" folder. When correctly programmed, this script can be used to expose APIs.
Below is an example of how to implement this:
The relevant parts of the snippet are as follows:
- Line 1: The downloadDocument closure defines an API, accessible by all HTTP methods.
- Line 2: The path parameter is extracted from the params map.
- Lines 3 to 6: The script verifies that the path is defined; if not, it returns an error.
- Line 7: The node is retrieved by its path. It's assumed to be within the main Enterprise volume, although you could implement alternative logic to access different volumes.
- Lines 8 to 11: The script checks that the node exists and that the user has access to it; if not, it returns an error.
- Line 12: The underlying java.io.File, which is nested in the content.content variable of the node, is returned.
- Lines 17 to 21: The ContentScript is configured to be exposed as an API.
Assuming the ContentScript is named document-util, you can download the Enterprise:MyFolder:MyDocument document using the following URL: /amcsapi/document-util/downloadDocument?path=MyFolder:MyDocument
Please note that this URL will function properly as long as you include a valid LLCookie with your request to authenticate the logged-in user; hence, it is particularly suited for browser-based invocations.
If you prefer a completely programmatic approach, you should first obtain a valid token by authenticating through the standard API. Then, you can invoke the URL /amcsapi/v1/document-util/downloadDocument?path=MyFolder:MyDocument, setting the token as the OTCSTicket header.
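As a sketch of that programmatic flow in JavaScript (the base URL, the `/otcs/cs.exe/api/v1/auth` authentication path, and the credentials are placeholders; check the paths against your own installation):

```javascript
// Sketch: authenticate via the standard Content Server REST API, then
// call the Content Script API with the ticket in the OTCSTicket header.
const BASE = 'https://server.example.com'; // placeholder host

// Build the ModuleSuite API URL for the downloadDocument closure
function downloadRequest(base, scriptName, docPath) {
  return `${base}/amcsapi/v1/${scriptName}/downloadDocument?path=${encodeURIComponent(docPath)}`;
}

async function downloadDocument(username, password, docPath) {
  // 1. Obtain a ticket from the standard REST authentication endpoint
  //    (path is an assumption -- adjust to your install)
  const auth = await fetch(`${BASE}/otcs/cs.exe/api/v1/auth`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username, password })
  }).then(r => r.json());

  // 2. Call the Content Script API, passing the ticket as OTCSTicket
  const res = await fetch(downloadRequest(BASE, 'document-util', docPath), {
    headers: { OTCSTicket: auth.ticket }
  });
  return res.arrayBuffer(); // raw file content
}
```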
Re: How to hide unused functions in xECM menu and enable Reminders
I just want to add a valid alternative that is simple and easily customizable even if you're not an OScript programmer: the Module Suite, which can also be used to customize an object's functions menu.
Module Suite Content Script can be used to make changes to the standard object function menus, by adding new options or removing existing ones. This feature is enabled by defining a Content Script that "filters" the object menu and performs the desired modifications. The "amgui" service provides a user-friendly interface for making these modifications to the menu object.
The following example shows a menu customization script that includes:
- fetching the original menu
- filtering the original menu entries (removing entries that match a specific expression)
- adding a divider row to split menu entries
- adding a submenu
- adding a custom menu entry to the new submenu
- returning the modified menu
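Content Script itself is Groovy-based, so purely to illustrate the shape of those steps, here is a JavaScript sketch of the same transformation over a hypothetical menu structure (the entry names, the filter expression, and the menu object shape are all invented for the example):

```javascript
// Illustrative sketch only: the menu customization steps applied to a
// hypothetical list-of-entries menu structure.
function customizeMenu(menu) {
  // 1. Fetch the original menu entries, then
  // 2. filter out entries matching a specific expression
  const entries = menu.entries.filter(e => !/^Copy|^Move/.test(e.name));

  // 3. Add a divider row to split menu entries
  entries.push({ type: 'divider' });

  // 4. Add a submenu ...
  const submenu = { type: 'submenu', name: 'My Tools', entries: [] };

  // 5. ... with a custom menu entry inside it
  submenu.entries.push({ type: 'entry', name: 'My Custom Action', url: '/custom' });
  entries.push(submenu);

  // 6. Return the modified menu
  return { ...menu, entries };
}

const menu = { entries: [{ type: 'entry', name: 'Copy' }, { type: 'entry', name: 'Open' }] };
console.log(customizeMenu(menu).entries.map(e => e.name));
// [ 'Open', undefined, 'My Tools' ]
```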
Learning to deploy Documentum on Kubernetes - UNSUPPORTED
A tutorial using Docker Desktop Kubernetes
😉Special thanks to Jose for contributing this tutorial. It is unofficial and unsupported. But very good info to consider.
- Both files are needed together: the PDF and the 7z.
- Slide 38 is work in progress that might come in future versions (Kubernetes Dashboard, metrics server, Kafka, RabbitMQ...).
Maybe provide input here. Hopefully this is something that can mature to a whitepaper…
Extract text from any file with Intelligent Viewing and use it in another service like GPT
The purpose of this article is to show and share knowledge on how easy it is to use Intelligent Viewing to extract text from any file and then use that text in another service, such as translation, summarization, or sentiment analysis.
Our source file is a PDF brochure from Lotus with 8 pages, viewed below in Intelligent Viewing.
This example will extract all the text from all 8 pages. Once the text has been extracted it can of course be used in many different ways. In this example we will send it to GPT for summarization and then, once summarized, send it to Google Translate for the summary text to be translated into our target language, French.
This example presumes we have a working Intelligent Viewing environment.
Using Postman and with the correct authentication token the following call will return all the existing publication details. We can see that the total count of publications is 63.
Once we have all the information, next we want to find the Publication ID on our specific document. Postman has a nice search feature enabling us to find the Publication ID of our published file. The publication ID in our case is:-
ecd0f23f-d9da-4706-ad01-57499edf37e9
By adding the PublicationID onto the previous call, we get back all the detail regarding our published document, including the page count.
The full JSON Path to the pageCount value is as follows:-
pageCount=_embedded["pa:get_publication_artifacts"][1]._embedded["ac:get_artifact_content"].content.pageCount
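The article's walkthrough is in Python; as a sketch, walking that JSON path in JavaScript against a minimal mock of the HAL response shape might look like:

```javascript
// Sketch: walk the JSON path above to reach pageCount in a publication
// details response. The mock below mirrors only the fields on the path.
function getPageCount(publication) {
  return publication._embedded['pa:get_publication_artifacts'][1]
    ._embedded['ac:get_artifact_content']
    .content.pageCount;
}

// Minimal mock (note index 1 of the artifacts array, as in the path above)
const mock = {
  _embedded: {
    'pa:get_publication_artifacts': [
      {},
      { _embedded: { 'ac:get_artifact_content': { content: { pageCount: 8 } } } }
    ]
  }
};
console.log(getPageCount(mock)); // 8
```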
Now we switch into Visual Studio Code.
Here we import the Python libraries we will use and check whether the file into which we will write our text already exists.
Next we set the PubID of the file we will extract the text from.
Next, with our publication ID, we need to find the total number of pages so we can extract all the text on every page.
Then, using the following call on each page, we can extract the text and write it into extracted.txt:
http://otiv-highlight/search/api/v1/publications/" + pubID + "/text?page=" + pageidx + "&textOnly=true"
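A sketch of that per-page loop (in JavaScript here rather than the article's Python; the otiv-highlight host and textOnly parameter are taken from the URL above, and pages are assumed to be numbered from 1):

```javascript
// Sketch: build the per-page text-extraction URL for every page of a
// publication, using the URL pattern shown above.
function pageTextUrl(pubID, pageidx) {
  return `http://otiv-highlight/search/api/v1/publications/${pubID}` +
         `/text?page=${pageidx}&textOnly=true`;
}

// Loop over all pages; pageCount comes from the publication details call
function allPageUrls(pubID, pageCount) {
  const urls = [];
  for (let pageidx = 1; pageidx <= pageCount; pageidx++) {
    urls.push(pageTextUrl(pubID, pageidx));
  }
  return urls;
}

console.log(allPageUrls('ecd0f23f-d9da-4706-ad01-57499edf37e9', 8).length); // 8
```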
Now we have extracted all the text we must first strip any unwanted characters like newline "\n" before sending to GPT for Summarization. Here we will ask for all the text to be summarized into 10 sentences.
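The newline-stripping step might look like this sketch (JavaScript rather than the article's Python; collapsing all whitespace runs is an extra cleanup choice beyond the "\n" removal described above):

```javascript
// Sketch: strip newlines and collapse runs of whitespace before sending
// the extracted text to a summarization service.
function cleanText(raw) {
  return raw.replace(/\n/g, ' ')   // drop newlines
            .replace(/\s+/g, ' ')  // collapse repeated whitespace
            .trim();
}

console.log(cleanText('Lotus\nbrochure\n\n  page one'));
// Lotus brochure page one
```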
The full text extraction from all 8 pages is a total of 773 words
The original text of 773 words then gets summarized down to 118 words
From here we can then send this off to a translation service for example.
Variable "cltext" holds our summary text and we will use this to pass to Google Translate with source and target parameters, the target in this case being French.
Here is the text now translated to French for example:-
In summary we started with an 8 page PDF document that we extracted all the text then sent that text for a 10 sentence summary and then translated the summary from English to French.
Many thanks,
Phil ****
Re: Is Cloud Fax and RightFax the same?
Hi,
RightFax does have an API but like you, I can't find any reference to it since the website redesign.
If you find something, please post a link, and I'll do the same.
Good luck
Re: Initiate Workflow from REST
A few months ago I traced initiating a workflow with the REST API and below are the steps I found. Some of the operations here made their way into the @kweli/cs-rest npm package, which you can read about on my blog.
Here's a rough outline of initiating a workflow with the REST API, which is greatly simplified.
POST /api/v2/draftprocesses
parameters: { workflow_id: map_id } - map_id is the dataid of the workflow map
From the response,
let draftprocess_id = results.draftprocess_id
Then,
GET /api/v1/forms/draftprocesses/update params: {draftprocess_id:draftprocess_id}
The response is workflowInfo, which gives you the options for setting up the workflow. E.g.,
- workflowInfo.data.instructions - instructions
- workflowInfo.data.title - title
- workflowInfo.data.authentication - true/false, whether authentication is required (i.e., a password must be supplied with the call to initiate the WF)
- workflowInfo.data.attachments_on - true/false, whether attachments are permitted
- workflowInfo.data.process_id - process id (you'll need this later)
Workflow attributes are in workflowInfo.forms, which contains an array with the forms defined on the "Smart View" tab of the Start step. I don't know if the "Initiate in Smart View" setting needs to be enabled for this to show up. Each workflowInfo.forms[index].data object contains the form values that can be modified and submitted to update the workflow:
PUT /api/v2/draftprocesses/${draftprocess_id} params: { action:"formUpdate", values: formValues }
Attachments are a bit tricky since you need to extract the DataID of the attachments folder.
let data_packages = workflowInfo.data.data_packages
let attachment_pkg = data_packages.find(pkg => pkg.type == 1 && pkg.sub_type == 1)
let attachment_folder_id = attachment_pkg.data.attachment_folder_id
Then use the standard document upload API to upload attachments.
Finally, to initiate the workflow:
PUT /api/v2/draftprocesses/${draftprocess_id} params: { action: "Initiate", comment: comment, authentication_info: { password: password } }
The comment and authentication_info keys are only required if the workflowInfo.data.comments_on and workflowInfo.data.authentication keys, respectively, are true.
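Putting those conditional keys together, a sketch of building the Initiate payload (the helper name is mine, not part of any package):

```javascript
// Sketch: build the body for the final PUT /api/v2/draftprocesses/{id}
// call, including comment and authentication_info only when the
// corresponding workflowInfo flags are true.
function buildInitiateBody(workflowInfo, comment, password) {
  const body = { action: 'Initiate' };
  if (workflowInfo.data.comments_on) {
    body.comment = comment;
  }
  if (workflowInfo.data.authentication) {
    body.authentication_info = { password: password };
  }
  return body;
}

const info = { data: { comments_on: true, authentication: false } };
console.log(buildInitiateBody(info, 'please review', 'secret'));
// { action: 'Initiate', comment: 'please review' }
```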
That's roughly how I did it. I hope it helps.
Re: Smart UI commands - implementing the open classic view example
Hello Hugh,
From what I understood, you have an existing Smart UI extension (let's call it hugh 😉) and want to add a new command in there, but it's not working. The command never shows.
I'm not sure what's wrong, but for a command to actually show somewhere, you have to do a few steps:
- Create your command code (copy from SDK or make your own)
- This means ****.command.js, which extends CommandModel
- Plus a ****.nodestable.toolitems.js file, (which you pass to the nodestable widget using hugh.extensions.json later)
//open.classic.nodestable.toolitems.js
define(function () {
  'use strict';
  return {
    // "otherToolbar" here stands for one of the nodetables toolbars.
    // There's also an "inlineActionbar", which shows when you mouse over a row
    otherToolbar: [
      {
        signature: 'OpenClassicCustom', // Must be same as the command signature
        name: 'Open Classic Page'
      }
    ]
  };
});
- Add the two .js files mentioned above into your src/bundles/hugh-all.js
// Placeholder for the build target file; the name must be the same,
// include public modules from this component
define([
  'hugh/commands/open.classic/open.classic.command',
  'hugh/commands/open.classic/open.classic.nodestable.toolitems'
], {});
require([ 'require', 'css' ], function (require, css) {
  // Load the bundle-specific stylesheet
  css.styleLoad(require, 'hugh/bundles/hugh-all');
});
- Add the files into your src/hugh-extensions.json
{
  "csui/models/server.module/server.module.collection": {
    "modules": {
      "hugh": { "version": "1.0" }
    }
  },
  "csui/utils/commands": {
    "extensions": {
      "hugh": [ "hugh/commands/open.classic/open.classic.command" ]
    }
  },
  "csui/widgets/nodestable/toolbaritems": {
    "extensions": {
      "hugh": [ "hugh/commands/open.classic/open.classic.nodestable.toolitems" ]
    }
  }
}
- Run grunt and copy the out-release contents into your OTCS/support/hugh folder on the Content Server machine
- The command should now be showing inside the nodetable's top header (after you select an object using the checkbox in the leftmost column)
Can you check you got all these steps?
Other useful information:
- The open.classic.command has no enabled method, because it "inherits" it from CommandModel
- 'open.classic.command' extends 'csui/utils/commands/open.classic.page' which extends 'csui/models/command' which has the default enabled implementation hidden inside.
- It's actually possible to test and develop commands using the local server in the extension, but it takes some test/index.html setup
I've run out of time, but I hope you have good luck solving your problem.
I remember how happy I was when I finally managed to get the open.classic.command working for the first time 😅
Javascript Sample to demonstrate managing the lifecycle of your application
As a developer on OpenText Cloud Platform, the lifecycle of your application can be represented by the flow above. You deploy your application; when a subscriber signs up, you provision them; the subscriber adds users to the application, and then those users use the application. The Develop Application step is really just a special iterative case of all the following steps, so this article will follow the flow from Deploy Application onwards.
OpenText has now provided an open source Javascript sample which implements this flow using the Developer Admin API (see here) and the sample can be downloaded from Github here. Additional information on working with Developer can be found here.
Managing all of this are a number of key concepts:
- An Organization is your business account within a Region
- The Org owner is a Developer who is authenticated with the OT platform using our OT Connect customer system
- The Developer creates multi-tenanted apps owned by the Organization
- Organization creates, owns and manages tenants for application subscribers
- A Subscriber’s tenant is subscribed to the developer’s application
API Authentication
For secure API access from your application to the platform to perform these management actions, you will make use of two different OAuth security schemes, or contexts:
- The organization security scheme is for APIs which manage Org owned entities – the entities you own – the Organization itself, your applications and your subscriber tenants
- The tenant security scheme is for APIs which manage Tenant owned entities – Tenant or Subscriber owned entities such as users, which applications they use and how they are authenticated AND the sandboxed information management service API calls made by users - e.g. to content storage, workflow, etc.
The Developer Admin API documentation here covers how to authenticate with the platform using these two security schemes. The sample application provides implementations of both schemes to gain the required access tokens for the subsequent API calls in the sample.
Retaining IDs
When performing the above lifecycle for real you will need to maintain key entity IDs returned in responses. You will retain these as references within your application database. These references are UUIDs generated by the platform. Examples are:
- applicationId
- tenantId
- authenticatorId
- userId
The sample webapp presents these IDs in the response data within the UI, from where they can be copied and pasted into subsequent requests within the webapp.
Deploy Application
Assuming you have successfully developed and tested your application, the first thing you’re going to need to do is deploy your application.
There are two steps to this:
- Creating the application
- Configuring the callback URL for your user authentication flows
All of this can be done via the Console UI as it is an infrequent activity but the sample app will show how you do it with API calls using the organization security scheme. Within the sample webapp, steps 1 & 2 are performed from the Organization page.
Provision Subscriber
Having deployed your application onto the platform, you now need to provision a subscriber to use the application, and you'll do this each time you on-board a subscriber.
There are three core steps to provisioning a subscriber:
- Provision the subscriber sandbox - in OpenText terminology, the tenant
  - A tenant provides full isolation between subscribers' data
  - Only tenant users can be given access to the tenant and its data
- Provision your application to the newly created subscriber tenant
  - Users are given access to applications and their data via a tenant with which they authenticate
  - Multiple applications can be provisioned to the same tenant
- Set up the external identity provider for tenant users - this is your application, or an IdP your application uses; in OpenText terminology, these are authenticators
The sample application shows how steps 1 and 2 are performed via API call within the organization security scheme. Within the sample, steps 1 & 2 are also performed from the Organization page of the sample webapp.
Step 3 is performed using an identity authenticated with the tenant security scheme. Within the sample, step 3 is performed from the Tenant page of the sample webapp. However, since most, if not all, of your users will use a custom rather than the built-in platform authenticator, you will create the authenticator before you create users, and so the order above is the order in which you are likely to perform subscriber provisioning.
Add Subscriber Users to Application
There are two simple steps to this - and you'll do this each time you onboard a new user for a subscriber:
- Create the user within the tenant/sandbox
- Add them to the application
The sample application shows how steps 1 and 2 are performed via API call within the tenant security scheme. When the user is created the request body of user meta-data contains the authenticatorId of the external authenticator (IdP) configured for your application. Within the sample, steps 1 & 2 are performed from the Tenant page of the sample webapp.
Use the Application
Having done all of that - your users are now in a position to login and use your application – but in order for that to work there are a number of key requests each time a user logs-in:
- Authenticate the subscriber user using your identity provider, via the authenticator previously configured for the user. The sample shows how this can be done from a Single Page App (SPA), a public or untrusted OAuth client, using the Auth Code grant flow with PKCE:
  - (1a) Authenticate the user against the IdP using their authenticator and obtain an Authorization Code
  - (1b) Exchange the Auth Code for an Access Token to the OT platform and get back other tokens in the response
- Call the required Information Management APIs as the user, in the context of the subscriber's sandbox/tenant, using the Access Token
The sample web app shows how step 1a is performed via a browser request and step 1b via API call within the tenant security scheme. Step 2 also uses the tenant security scheme, with the access token obtained in step 1b. The sample webapp illustrates step 2 through API calls to the CMS API to upload and download a JSON document.
Within the sample, steps 1 & 2 are performed from the User page of the sample webapp.