
(sandbox5) and the data is stored remotely on a DB2 server (sandbox6). Another server is used for Deployment Manager (sandbox2). For the purpose of this example, all servers run Windows 2000 Advanced Server; however, any of the WebSphere-supported platforms can be used.




Figure 21-5 WebSphere Portal cluster.






For the purpose of this example, assume the following:
1. You have installed and configured WebSphere Network Deployment 5.0.1 and WebSphere Application Server 5.0.1 on sandbox2.rigorconsultants.com.
2. You have enabled security on Deployment Manager using single sign-on, and the authentication server is IBM Directory Server on sandbox5. Java 2 security needs to be disabled. Make sure that wpsadmin has been assigned the Administrator role in the Deployment Manager Admin console (sandbox2.rigorconsultants.com:9090/admin).
3. IBM Directory Server 5.1 has been installed and configured on sandbox5, and IBM Directory Client is installed on sandbox3 and sandbox4. The latest fixpacks have been installed.
4. DB2 Enterprise 8.1 database has been installed and configured on sandbox6 and the DB2 client is installed on sandbox3 and sandbox4. The latest fixpacks have been installed.
5. IBM HTTP Server 1.3.6 is installed and configured on sandbox1.
6. For both sandbox3 and sandbox4 (which must be done after you set up sandbox1, sandbox2, sandbox5, and sandbox6), you have done the following:
a. Performed a full installation of WebSphere Portal 5.0. You cannot install WebSphere Portal into an existing cluster environment. If the node is within a cell, you must first remove it from the cell, install WebSphere Portal, and then add the node back to the cell.
b. Configured both WebSphere Portal servers to use the remote Web server sandbox1.
c. Configured the WebSphere Portal servers to access the remote IBM Directory Server sandbox5 and the remote DB2 server sandbox6.
d. Configured the tables on the remote database server and exported the data for the sandbox4 WebSphere Portal to the DB2 server.
e. Set up wpsbind and wpsadmin in the LDAP server as administrative users using the definitions specified in the LDIF file.
7. You have validated that each component is working by separately
testing each WebSphere Portal server, IBM Directory Server, the DB2
server, and the Network Deployment Manager server.
8. The date and time on Deployment Manager (sandbox2) and the nodes (sandbox3, sandbox4) must agree to within 5 minutes and must be in the same time zone.






9. WebSphere Network Deployment Manager is running on sandbox2 and all the node names are unique in the cell.
10. Server1 and WebSphere Portal are running on sandbox3 and
sandbox4.
11. Maximum heap size has been increased to 512 MB in <was_root>\bin\addnode.bat and <was_root>\bin\removenode.bat by adding -Xmx512m to the Java command line, as sketched below.
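
To make item 11 concrete, here is a minimal sketch of the edit. The actual java launch line inside addnode.bat and removenode.bat varies by release, so everything here except the -Xmx512m option itself is illustrative:

    REM <was_root>\bin\addnode.bat -- illustrative excerpt only
    REM Before:  %JAVA_HOME%\bin\java <existing options> ...
    REM After:   %JAVA_HOME%\bin\java -Xmx512m <existing options> ...

The same option is added to the java line in removenode.bat.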


Adding WebSphere Portal to a Cell
After you have set up your infrastructure, you need to add the WebSphere Portal nodes to the cell by doing the following:
1. Click Start ➪ Programs ➪ IBM WebSphere ➪ Application Server V5.0 ➪ Start the Server.
2. Click Start ➪ Programs ➪ IBM WebSphere ➪ Portal Server V5.0 ➪ Start the Server.
3. Open a command prompt, go to <was_root>\bin, and execute addnode sandbox2.rigorconsultants.com 8879 -username wpsbind -password wpsbind -includeapps. The Deployment Manager host name is sandbox2.rigorconsultants.com, the SOAP port is 8879, and wpsbind is the Deployment Manager administrator username and password. The -includeapps option is used for the first node only in order to install the applications into the Deployment Manager cell. Upon completion (which will take some time) you will see the message “node sandbox3 has been successfully federated.”
4. Repeat the process for sandbox4, except do not use the -includeapps option in the addnode command. Both invocations are sketched below.
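
Putting the two federation commands side by side, the invocations look like this (host name, SOAP port, and credentials are the ones used throughout this example):

    cd /d <was_root>\bin
    REM First node (sandbox3): -includeapps installs the applications into the cell
    addnode sandbox2.rigorconsultants.com 8879 -username wpsbind -password wpsbind -includeapps
    REM Second node (sandbox4): omit -includeapps
    addnode sandbox2.rigorconsultants.com 8879 -username wpsbind -password wpsbind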
Remember that when you add a node to a cell, all administration must be done through the Deployment Manager Administration console instead of the Portal Administration console. So you need to remove the Portal Administration console to avoid future problems with the plug-in configuration.
Do the following to remove the Portal Administration console:
1. Open the administrative console
http://sandbox2.rigorconsultants.com:9090/admin
2. Log in as the administrative user wpsbind.
3. Expand Applications on the Navigation menu.
4. Click Enterprise Applications.
5. Find the WpsAdminconsole application and click Uninstall.
6. Save the changes to the master configuration.






Implementing the WebSphere Portal Cluster
Now that the WebSphere Portal nodes have been added to the cell, you need to create the cluster. Perform the following tasks:


1. Log in to the Deployment Manager Administrative console http://sandbox2.rigorconsultants.com:9090/admin as wpsbind.
2. Expand Servers and select Clusters. On the right side is the Create a
New Cluster window.
3. Enter the cluster name (we chose Portal Cluster) and check Prefer
Local Enabled and Create Replication Domain for this Cluster.
4. Assign a weight that determines how much workload is distributed
to that node relative to the other node. We recommend keeping an
equal weight or the default value.
5. Click Select an Existing Server to add to this cluster, choose the server
WebSphere Portal from the node sandbox3, and click Next.
6. Enter the name of the next cluster member. We called it
WebSphere Portal Sandbox4.
7. Assign a weight that determines how much workload is distributed
to that node relative to the other node. We recommend keeping an
equal weight or the default value.
8. Select the node sandbox4 and check Generate Unique HTTP Ports
and Create Replication Domain for this cluster. It is imperative that
you click Generate Unique HTTP Ports, otherwise there will be
conflicts between the original servers and other servers on sandbox4.
9. Click Apply, and the cluster member WebSphere Portal sandbox4 will appear in the application server list at the bottom of the page.
10. Click Next and the summary window is displayed. Then click Finish.
11. Save the changes to the master con¬guration.
12. Log in remotely to sandbox3 and sandbox4 and change the application server name to the cluster name so that portlet deployment can work. Open a command prompt, go to <wp_root>\shared\app\config\services, and edit DeploymentService.properties. Change the value assigned to wps.appserver.name to Portal Cluster, and save the file. (A sketch of this edit appears after the list.)
13. Go back to the Deployment Manager Administrative console, expand
Servers, and click Clusters.






14. Select the cluster Portal Cluster and click Start. Wait for some time.
15. Check if the cluster members have started by expanding Servers,
selecting Application Servers, and refreshing the Status view.
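
As a reference for step 12, the relevant line in DeploymentService.properties ends up looking like the following; all other keys in the file are left as they were:

    # <wp_root>\shared\app\config\services\DeploymentService.properties
    # Point portlet deployment at the cluster instead of the stand-alone server:
    wps.appserver.name=Portal Cluster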

The next few steps relate to updating the WebSphere plug-in configuration, since the plug-in configuration now resides on the Deployment Manager (sandbox2) instead of the nodes (sandbox3, sandbox4). The HTTP Server host name and port must be added to the host alias list in order for a clustered portal application to be accessible. (Remember, you are still in the Deployment Manager Administrative console.)

16. Expand Environment and click Virtual Hosts.
17. Select default host, and click Host Aliases on the Additional
Properties table.
18. Click New and add sandbox1.rigorconsultants.com (our Web
Server) for the Host Alias and 80 for the Port Number.
19. Save the configuration.
20. Regenerate the plug-in: expand Environment, select Update Web Server Plug-in, and click OK.
21. Copy the plugin-cfg.xml file located in the <nd_root>\config\cells directory to <was_root>\config\cells on sandbox1 (our Web server). Edit plugin-cfg.xml and change <nd_root>\etc\plugin-key.kdb to <was_root>\etc\plugin-key.kdb. Also change <nd_root>\etc\plugin-key.sth to <was_root>\etc\plugin-key.sth. Save the file.
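
For orientation, the two keyring references in plugin-cfg.xml typically sit inside the HTTPS transport definition for each server. The fragment below is only a sketch: the host name, port, and surrounding markup are illustrative, and the <was_root> placeholder stands in for the real path, as elsewhere in this chapter:

    <Transport Hostname="sandbox3.rigorconsultants.com" Port="9443" Protocol="https">
       <Property Name="keyring" value="<was_root>\etc\plugin-key.kdb"/>
       <Property Name="stashfile" value="<was_root>\etc\plugin-key.sth"/>
    </Transport>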

These last steps enable dynamic caching. Dynamic caching improves performance by caching the output of dynamic servlets and JavaServer Pages. However, it is also required for clustered portal applications because all nodes in a cluster share the same cache information.

22. Expand Servers, click Application Servers, and select
WebSphere Portal.
23. Select Dynamic Cache in the Additional Properties section and select
Enable service at server startup.
24. Select Enable cache replication and click on its link. Select the portal
cluster name (Portal Cluster) and the desired application server
name as the replicator (WebSphere Portal).
25. Ensure the runtime mode is Push only and click OK.
26. Save the changes to the master configuration.






27. Repeat steps 22 through 26 for the application server WebSphere Portal Sandbox4.


Testing Your WebSphere Portal Cluster
Before you perform the next steps, you need to ensure that the WebSphere Portal cluster is working. You can do this by testing the failover functionality, as follows:

1. Log in to the Deployment Manager Administrative console http://sandbox2.rigorconsultants.com:9090/admin as wpsbind.
2. Expand Servers and click Clusters.
3. Select the cluster Portal Cluster and click Start. Wait for some time.
4. Check if the cluster members have started by expanding Servers,
selecting Application Servers, and refreshing the Status view.
5. Log in to WebSphere Portal at http://sandbox1.rigorconsultants.com/wps/portal using wpsadmin.
6. Select the Administration page. If everything is fine, go back to your home page.
7. Pull the network cable out of a WebSphere Portal Node (sandbox3 or
sandbox4).
8. Select the Administration page. It should show up without any problems. Repeat the test, except plug the network cable back in and pull out the cable from the other node.
9. Select the Administration page again. If it appears, then your WebSphere Portal cluster has been successfully implemented.


Deploying Portlets into a WebSphere Portal Cluster
The next task is to set up Deployment Manager to automatically deploy an installed portlet to all WebSphere Portal application servers in the cluster. Basically, what you do is install a portlet through WebSphere Portal, and Deployment Manager then synchronizes all the other cluster members. The following are the steps to do this:

1. Log in to WebSphere Portal at http://sandbox1.rigorconsultants.com/wps/portal using wpsadmin.
2. Go to Administration ➪ Portlets ➪ Install.
3. Browse for the portlet WAR file and click Next.






4. The portlets included in the WAR file will be displayed. Click Next and then click Install.
5. You will then get a message stating that your portlets were installed
but not activated.
6. Open the Deployment Manager Administrative console http://sandbox2.rigorconsultants.com:9090/admin. You need to manually synchronize all nodes. You can turn autosynchronization on; however, for portlet deployment it is still recommended that you manually resynchronize the nodes. (A command-line alternative is sketched below.)
7. Expand System Administration and select Nodes.
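
If you prefer the command line, WebSphere also ships a syncNode script that resynchronizes a node with the Deployment Manager. A minimal sketch, assuming the node agent on the target node is stopped first; exact options vary by fixpack level, so treat this as illustrative:

    REM Run on the node being synchronized (for example, sandbox3)
    cd /d <was_root>\bin
    syncNode sandbox2.rigorconsultants.com 8879 -username wpsbind -password wpsbind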
