Web CMS (TeamSite)
WSMP & DMZ
johnguilfoyle
Has anyone out there successfully configured WSMP with the web app on a box out in a DMZ? I've opened the ports (UDP 2637, TCP/IP 4000, 4001, 389, all bidirectional) and set the cluster to listen on 4000 & 4001 in the config manager, but I'm having no luck getting the web app to see the cluster. Tomcat is throwing "com.imanage.cms.exceptions.cmsconnection.CmsConnectionPoolNotAvailableException: CmsConnection pool is not available" -- a pretty good indication the two aren't communicating. I've tried this with two different firewalls, and both are allowing traffic between the machines -- I can telnet to ports 4000 and 4001 from the DMZ web app to the cms box inside the firewall, for example.
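For anyone following along, the telnet test above amounts to something like this -- a minimal sketch using Python's socket module; the host IP is doc1's internal address from later in this thread, and is illustrative only:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds (like telnet)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative usage -- doc1's internal address and the WorkSite TCP ports
# mentioned above. Note this only exercises TCP; the UDP 2637 discovery
# port needs a separate check.
# for port in (389, 4000, 4001):
#     print(port, tcp_port_open("192.168.1.11", port))
```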
Is there a trick to this that's undocumented? Support has been thus far unable to troubleshoot the problem.
-John
Comments
JTNeville
I have worksitemp running as an extranet and in many different configurations.
For example, the main CMS and SQL server are in our colo. We have satellite offices all over -- US, UK, etc. I've found the best way to keep my data centralized but the performance robust is to seed my satellite offices with dumb web servers that look to the Tomcat server in the colo. A satellite IIS box needs only port 8009 open to run IIS locally and talk to a Tomcat box at a remote site.
In your particular case, if you need to make the CMS talk across your firewall, I'd hazard from the limited information provided that your DNS doesn't have its records correct, so a box inside the firewall isn't able to resolve the correct public IP for the DMZ box. You can always test with HOSTS files if you don't like tinkering with DNS.
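A quick way to check what a given box actually resolves a name to (hosts-file entries override DNS in the default resolver order) -- a minimal sketch, with the names illustrative:

```python
import socket

def resolved_ip(name):
    """Return the IPv4 address this box resolves `name` to, or None."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# Run on the box inside the firewall, e.g.:
#   resolved_ip("web1")   # compare against the IP you expect for the DMZ box
```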
Second, on my PIX's I have to open traffic in both directions as well as specifying which ports can talk along those channels. Doesn't sound like your issue but thought I'd throw it out there anyway.
Good luck. Hope something above helps.
johnguilfoyle
Thanks for the quick reply. I think you're right on the money that it's a DNS issue -- a packet sniffer says that all the right traffic is indeed getting through the firewall.
Sorry for providing limited info the first time round -- a little more detail:
CMS box (inside the firewall) has name doc1, internal IP 192.168.1.11
Tomcat box (in the DMZ) has name web1, internal IP 192.168.1.10, public IP 68.111.45.77
The firewall has public IP 68.111.45.76
Pinging doc1 from web1 reveals its fully qualified domain name -- doc1.namechangedtoprotecttheinnocent.com -- and the firewall's public IP.
Pinging web1 from doc1 reveals its fully qualified domain name -- web1.namechangedtoprotecttheinnocent.com -- and its internal IP.
Networking is not my strong suit. Are there obvious changes I need to make to DNS or the respective Host files?
Thanks in advance,
-John
JTNeville
Do you want to push the traffic through the outside interface to the public IP (probably not) or directly to the dmz machine's private IP?
Normally, with my Pix, I would have my DMZ boxes on a different IP range than my internal machines. In your case,
"Pinging doc1 from web1 reveals its fully qualified domain name -- doc1.namechangedtoprotecttheinnocent.com -- and the firewall's public IP"
If this statement is true from the box inside the firewall, and doc1 is the box in the DMZ, then you need to add either a host file entry or a DNS entry to your private DNS (the one that keeps order behind the firewall) that ties the DMZ box name to the internal IP akin to: WEB1 192.168.1.****
If the statement is true on the DMZ box then you need to do the same type of thing but spec the DOC1 internal IP.
Hosts files are easy to work with but a pain for long-term maintenance; normally I only use them for testing.
I can't tell, but are you multihoming your DMZ box, or does the public IP sit on the firewall and just push traffic into the private IP? If the latter, then host files or private DNS entries on each would work.
If you are using an internal DNS, it should have in its database only the names-to-private-IP mappings and forward all other requests to the external DNS. This way web1 resolves internally to the private IP, and yet when a query needs a public IP (like www.web1domains.com) it will hit the external DNS and get back the public IPs.
I hope that's not too confusing.
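Concretely, a hosts-file entry of the sort described above is just a line mapping the name to the private IP. Path and values below are illustrative (doc1's internal address from this thread), not the poster's actual file:

```
# %SystemRoot%\system32\drivers\etc\hosts (Windows) or /etc/hosts (Unix)
192.168.1.11    doc1
```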
johnguilfoyle
Very helpful replies so far -- I think we're closing in on the heart of the issue.
To clarify further:
-Yes, ideally we'd push traffic on the internal IP
About the DMZ boxes being in a different IP range: because of the difficulties I've been having, I've actually tried this in two different configurations.
Configuration A
doc1 is the cms box -- has an internal IP: 192.168.1.11
web1 is the Tomcat box -- has an internal IP: 192.168.1.10, and a public IP: 68.111.45.77
The firewall also has a public IP: 68.111.45.76
To reach any machine behind the firewall, NAT and port-forwarding are used to direct the traffic the right way. For example, if a message is sent to 68.111.45.77:2637, it is supposed to be routed to 192.168.1.11:2637 -- doc1, where the cms service resides.
The host file on web1 associates doc1 with the firewall public IP of 68.111.45.76 -- the 192 network is unreachable from outside the firewall.
No luck in this scenario.
Configuration B
The domain is 159.x.x.x
We have an actual DMZ network -- 172.x.x.x
usden7stagvdwsc is the cms server, it's on the domain at 159.21.66.228
usden7stagext1 is Tomcat, it's on the DMZ network at 172.28.252.4
usden7stagext1 also has a public IP of 12.41.52.83, and can be reached at mwhtag01.mwhglobal.com
In this configuration, I have a host file entry set on the cms box identifying the Tomcat box as the 172 address rather than the 12 address. Like so:
172.28.252.4 usden7stagext1
Here are the results of some pertinent pings:
From usden7stagvdwsc
ping 172.28.252.4 = 172.28.252.4
ping usden7stagext1 = usden7stagext1 [172.28.252.4]
ping mwhtag01.mwhglobal.com = mwhtag01.mwhglobal.com [172.28.252.4]
ping mwhtag01 = mwhtag01.mwhglobal.com [172.28.252.4]
From usden7stagext1
ping usden7stagvdwsc = usden7stagvdwsc.mwamericas.mwhglobal.com [159.21.66.228]
ping usden7stagvdwsc.mwamericas.mwhglobal.com = usden7stagvdwsc.mwamericas.mwhglobal.com [159.21.66.228]
ping 159.21.66.228 = 159.21.66.228
What else? The name of the cluster is the name of the cms machine -- usden7stagvdwsc
And no, I'm not responsible for the naming conventions.
In any case, no luck here either -- same error about the cms connection pool not being available.
Any insight on what's wrong in either configuration?
Thanks again for the help so far.
-John
JTNeville
I think we should level the field here by clarifying a few common assumptions. Some are pretty basic but I find it always helps to declare everything when troubleshooting an issue.
0) **The operating system and MP system passwords must be the same on all servers in the cluster if it's a peer environment and not a domain**
1) All servers in the MP cluster need to have "impm" or "impmservice" running on them.
2) All servers need to have a host name and an IP that is correctly registered in DNS or via a local host file.
3) You can have multiple CMS's running on different servers in the cluster.
4) The cluster name is almost always the name of first server to execute the MP CMS service.
-->5) The web server does not require any MP software loaded on it.* Specifically it doesn't require "impm" or "impmservice" running on it.
-->6) The Tomcat server only requires java 1.4.2_03 and the MP webapp installation on it. This can be located on the same server as #3. It also doesn't require "impm" or "impmservice" running on it.
6a) If the Tomcat server isn't installed on the same server as the webserver application, then the worksite.properties file needs to be edited to point to the IP or common name of the web server **AND** you must make the IIS/TOMCAT integration modifications including the regedit **AS WELL** as copying your existing executed Tomcat home directory (meaning the war file has been unpacked, i.e. you have run Tomcat at least once successfully since the install of the webapp) to the IIS server (maintaining path integrity).
-----------------------------Networking----------------------------
7) In instances of multiple subnets, the gateways and routing are working and have been verified by attaching to server services across domains by both UNC name and IP **AND** on ports below and above 1024.
8) If a firewall is in use, the constraints, policies and DMZ spaces have been checked for conflicts and, as in #7, routing amongst the different private spaces has been checked and proven working.
Ok, from what I see, let's focus on your second case, configuration B (the first configuration appears to try to route from the inside out through the firewall and then back in through the firewall to the DMZ host, which might work but in my experience is always excessively flaky even on a robust Pix firewall implementation).
Your pings from ..wsc to ..ext1 show that basic routing is occurring between the private network (159.xx) to the dmz (172.xx) but what policies are on the firewall to control routing from the private network to the dmz and back?
Referencing 5 and 6 above, is there some internal need as to why you are running WorkSite MP server on the Tomcat server? You don't need it there unless you are using that server to do double duty and run some of the other low-overhead MP services like James or notification. All you need to do after installing Tomcat and the webapp is open the config.jsp page in a web browser, add your cluster name and the libraries, and make sure that the Tomcat server can successfully resolve the UNC names to the proper IPs (in your case the private ones).
I'll stop here so as not to put too much on deck at once and potentially confuse the issue more.
johnguilfoyle
Alright, allow me to respond in-line to these. Most of this is basic stuff; I've got several MP solutions up and running in environments where everything is on one domain / network.
So:
> "0) **The operating system and MP system passwords must be the same on all servers in the cluster if it's a peer environment and not a domain**"
In both cases for me, there's only one server in the cluster.
> "1) All servers in the MP cluster need to have "impm" or "impmservice" running on them."
Roger.
> "2) All servers need to have a host name and an IP that is correctly registered in DNS or via a local host file."
I believe this is the case, but it's tough to verify because in both configurations I describe DNS is hosted off-site.
> "3) You can have multiple CMS's running on different servers in the cluster."
Sure.
> "4) The cluster name is almost always the name of first server to execute the MP CMS service."
And that's how I always configure clusters as well.
> "-->5) The web server does not require any MP software loaded on it.* Specifically it doesn't require "impm" or "impmservice" running on it."
Check.
> "-->6) The Tomcat server only requires java 1.4.2_03 and the MP webapp installation on it. This can be located on the same server as #3. It also doesn't require "impm" or "impmservice" running on it."
Yup. Running JSDK 1.4.2_03 and the version of Tomcat that comes bundled with the 4.1 web app. 4.1.30, I think.
> "6a) If the Tomcat server isn't installed on the same server as the webserver application, then the worksite.properties file needs to be edited to point to the IP or common name of the web server **AND** you must make the IIS/TOMCAT integration modifications including the regedit **AS WELL** as copying your existing executed Tomcat home directory (meaning the war file has been unpacked, i.e. you have run Tomcat at least once successfully since the install of the webapp) to the IIS server (maintaining path integrity)."
For the time being, I'm having the Apache/Tomcat bundle act as both web and app server. One less level of complexity to deal with for now... sidebar: do you find that having an IIS front end helps with performance? The systems I've deployed so far have such light user loads that Tomcat alone has shown good performance.
> "-----------------------------Networking----------------------------
7) In instances of multiple subnets, the gateways and routing are working and have been verified by attaching to server services across domains by both UNC name and IP **AND** on ports below and above 1024."
This I cannot verify. It's proving very difficult to troubleshoot, since I ask the respective network groups, "Are you sure the traffic is moving between networks and machines as it should?" and I get a simple, "Yes."
This bears further investigation on my part, though.
> "8) If a firewall is in use, the constraints, policies and DMZ spaces have been checked for conflicts and, as in #7, routing amongst the different private spaces has been checked and proven working."
Same as above. I unfortunately don't have direct control over the firewall for Config B, where the multiple subnets are present; in Config A, where the web machine is simply outside the firewall while remaining on the single internal network, I do have control of the firewall and have (I believe) confirmed that the correct policies are in place. To verify this, I actually opened up the entire firewall to allow all traffic and still witnessed the same "cms connection pool not available" error.
> "Ok, from what I see, let's focus on your second case, configuration B (the first configuration appears to try to route from the inside out through the firewall and then back in through the firewall to the DMZ host, which might work but in my experience is always excessively flaky even on a robust Pix firewall implementation)."
The first config is a situation where I'm setting up a second web app for external access. I already have an internal web app (running on the cms machine, doc1) that is performing well. To provide access over the Internet, I'm setting up a second web/app server -- on web1. What would the better way be to model the network? Stick web1 on its own subnet, like I've done in config B?
> Your pings from ..wsc to ..ext1 show that basic routing is occurring between the private network (159.xx) to the dmz (172.xx) but what policies are on the firewall to control routing from the private network to the dmz and back?
All traffic is allowed from the private network to the DMZ network... from the DMZ to private network, connections are allowed only on UDP port 2637 and TCP/IP ports 389, 4000 and 4001 -- and then, only to the cms server, ..wsc.
> Referencing 5 and 6 above, is there some internal need as to why you are running WorkSite MP server on the Tomcat server? You don't need it there unless you are using that server to do double duty and run some of the other low-overhead MP services like James or notification. All you need to do after installing Tomcat and the webapp is open the config.jsp page in a web browser, add your cluster name and the libraries, and make sure that the Tomcat server can successfully resolve the UNC names to the proper IPs (in your case the private ones).
I'm not sure how you got the impression that I'm running Tomcat on a server that also has various wsmp services running on it; I'm not.
In config A, doc1 has all the wsmp services; web1 has Tomcat.
In config B, ..wsc has all the wsmp services; ..ext1 has Tomcat.
And yep, I've configured things normally in the config.jsp page. Is there perhaps an issue with name resolution since the name of the cluster isn't fully qualified? For example, the cluster name is doc1, but the fully qualified machine name is doc1.domain.com? In config.jsp, I simply use the cluster name.
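One way to check that from the Tomcat box -- a minimal sketch; the names are config B's, and whether the short name resolves depends on that box's DNS suffix search list or hosts file:

```python
import socket

def resolve(name):
    """IPv4 address `name` resolves to on this box, or None if it doesn't."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# Run on ..ext1: the short cluster name and the FQDN should both resolve,
# and to the same (private) address, for the short name in config.jsp to work.
# print(resolve("usden7stagvdwsc"))
# print(resolve("usden7stagvdwsc.mwamericas.mwhglobal.com"))
```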
> I'll stop here so as not to put too much on deck at once and potentially confuse the issue more.
I think I'm on board with everything you've said so far, and I really do appreciate the help. Support unfortunately seems baffled by this issue as well, which is putting me in a tough spot.
Off to pester the network group about policies between the two subnets.
-John
JTNeville
Can you post the exact CMS error from the log with the CMS server?
The reason I assumed you had multiple servers is that the connection pool error is normally associated with the SQL/CMS boxes not being able to talk. I can't think of a non-password issue that would make that error occur on a single-server setup. But keep feeding us info and I'm sure we can help suss it out.
If you have only a single server hosting all services, and it works inside the firewall, but the webapp fails when you run it in the DMZ (the best test would be to load Java on the DMZ box, copy your existing Tomcat install over, and start it up), that leads me back to the idea it's a DNS issue (can't resolve the name properly) or a firewall issue (gets the name OK but packets go into the void). I might see more if you can share the exact error message and log file name.
johnguilfoyle
Find attached a text file containing the error I get from Tomcat and two web app logs with some interesting detail -- worksitemp.log and cms.worksitemp.usden7stagext1.log.
The CMS logs on the wsmp server don't even register that the web app in the DMZ is trying to reach them -- which is why I originally pegged this as a firewall problem. I actually think DNS is the issue, but with the host entries I've got now I'm not really understanding how.
"If you have only a single server hosting all services, and it works inside the firewall, but the webapp fails when you run it in the DMZ (the best test would be to load Java on the DMZ box, copy your existing Tomcat install over, and start it up), that leads me back to the idea it's a DNS issue (can't resolve the name properly) or a firewall issue (gets the name OK but packets go into the void). I might see more if you can share the exact error message and log file name."
Exactly. The cms server on ..wsc is functioning normally with a web app that's inside the firewall on the domain. The web app is actually on a second box inside the domain, but the important note is that it works fine.
I'm about to set up a web server on ..wsc, the cms box, where I'll host pages on the ports in question and see if I can hit them from ..ext1 through the firewall. That will verify TCP/IP at least, though I'm not sure how to verify the "discovery" UDP 2637 port. One interesting thing I noticed in the cms.worksite.* log is this:
Thu Dec 09 08:58:48.296: WARN [CmsConnectionMonitor] warn: exception= Failed to execute message: CmsGetChallenge
Return code=700
Error msg=Desired protocol version is not suppported
Library=dev
Server=USDEN7STAGVDWSC
Protocol not supported leads me to believe that perhaps UDP isn't turned on for port 2637, though I've been told repeatedly that it is. Hmm.
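For what it's worth, UDP is hard to verify directly since it's connectionless -- a minimal probe sketch is below (no reply proves little, since the service may simply ignore arbitrary packets; a packet capture on the cms box is the authoritative check). Host and port are config B's:

```python
import socket

def udp_probe(host, port, timeout=2.0):
    """Crude UDP reachability probe: send a datagram and wait for any reply."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(b"probe", (host, port))
        s.recvfrom(1024)      # any reply at all means something answered
        return "reply"
    except socket.timeout:
        return "no reply (filtered, or service ignoring the probe)"
    except OSError:
        return "unreachable"  # e.g. an ICMP port-unreachable surfaced
    finally:
        s.close()

# e.g. from ..ext1:  udp_probe("159.21.66.228", 2637)
```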
-John
JTNeville
Yeah, kind of the long way to get back to what's at the heart of the issue, but it does indeed look like a DNS/firewall issue. A good test would be to pop the DMZ box to the inside and fire up the webapp; if it works, then you can clearly show the network admin that the firewall is eating packets.
johnguilfoyle
Finally, progress.
Remember config A and config B? Config A isn't working because it's an old firewall using PAT instead of NAT -- we think. We're looking into replacing the firewall now. Config B was something stupid on my part: the server I was trying to hit was 4.0, not 4.1. Evidently the 4.1 web app doesn't like the 4.0 cms server...
Thanks for all the time spent. I'll report back on the resolution of the PAT/NAT issue.
-John
JTNeville
No problem, glad you figured it out. If you do ever suss out a specific list of ports required beyond the obvious (HTTP 80 and 8080, TCP ports 135-139 and 445, UDP ports 135-139, TCP 8009 for Tomcat, 1433 for SQL, etc.), I think everyone would appreciate a summary of what you learned in the end.
Edited by JTNeville on 12/10/04 10:58 AM (server time).
johnguilfoyle
Finally solved the problem -- turns out it was indeed the firewall (an older SonicWall). Rather than doing NAT, it was instead doing -port- address translation. After upgrading the firewall, all is well. So the 'standard' list of ports is fine from what I can tell.
Thanks for the help troubleshooting, and Happy New Year to all.
-John