
An IBM WebSphere Application Server Architecture Using AWS

Target audience: mid- to highly-skilled Linux, WAS, and AWS administrators.

WebSphere Application Server (WAS) and WAS-based products have built-in capabilities for high availability (HA) and auto scaling. This article presents one of many potential WAS architecture designs that utilize AWS components.

The diagram below depicts a WAS cell with an HA WAS Deployment Manager (Dmgr) and 1…N WAS nodes in a horizontal cluster. Parts of the HA design use WAS built-in features; the rest utilize AWS infrastructure components.

Again, this is a high-level design, and the details will need to be worked out with a specific client. Internet domains, SSL/TLS, and certificates are not covered; these items will also need to be incorporated to the client's specifications.

[Figure: WAS cell with an HA WAS Deployment Manager (Dmgr)]

The process to create this WAS HA cell using AWS follows.

Create the WAS Cell on AWS:

  1. Create a VPC for WAS with a public subnet, Internet Gateway, Security Group, Network ACLs, and Routing Table. These AWS components will be created and configured to the client's specifications. The Security Group and Network ACLs should include the WAS port range and should limit who can reach the WAS cell from the internet.
  2. Create an instance of one of the AWS AMIs that meets the client's virtual machine specifications.
  3. Upload the WAS software to it.
  4. Install WAS with no profiles.
  5. Then save that instance as a custom WAS base AMI. This will be the AMI you select as you model the WAS HA cell.
  6. Create an AWS EFS file system and mount it on the current instance.
  7. Set up /etc/fstab so the EFS file system mounts at boot.
  8. Create a WAS Dmgr profile and place it on the EFS mount.
  9. Start the Dmgr.
  10. Create a WAS cluster with zero members.
  11. Now write a script:
    1. The script will execute at bootup; call it Dmgr_startup.
    2. It will start the Dmgr.
    3. Put the Dmgr_startup script in /etc/init.d.
    4. chmod 755 /etc/init.d/Dmgr_startup
    5. chkconfig --add Dmgr_startup
    6. chkconfig Dmgr_startup on
    7. Check it: chkconfig --list Dmgr_startup
  12. Now save this instance as your WAS Dmgr AMI.
  13. Now delete the Dmgr instance.
  14. Now create a Launch Configuration/Auto Scaling Group and use the WAS Dmgr AMI for the instance.
  15. The Auto Scaling Group will have a fixed size of 1 (min and max both 1).
  16. Start the Launch Configuration.
  17. Now create an Elastic Load Balancer for the Dmgr.
  18. Add the Dmgr instance and have the ELB monitor the WAS Dmgr admin port or the WAS Dmgr SOAP port of the Dmgr instance. These port numbers will vary if the client chooses not to use standard WAS ports.
  19. Modify the Auto Scaling Group to use the Elastic Load Balancer as its health check.
  20. Test your configuration by using a web browser to hit the ELB IP and admin port of the Dmgr (the WAS admin console).
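A minimal Dmgr_startup SysV init script might look like the following sketch. The WAS installation path and profile name (/opt/IBM/WebSphere/AppServer, Dmgr01) are illustrative assumptions and will differ per client:

```shell
#!/bin/sh
# Dmgr_startup - start the WAS Deployment Manager at boot
# chkconfig: 345 85 15
# description: Starts the WebSphere Deployment Manager
# NOTE: the install path and profile name below are assumptions.
WAS_HOME=/opt/IBM/WebSphere/AppServer        # assumed install location
DMGR_PROFILE=$WAS_HOME/profiles/Dmgr01       # Dmgr profile on the EFS mount

case "$1" in
  start)
    "$DMGR_PROFILE/bin/startManager.sh"
    ;;
  stop)
    "$DMGR_PROFILE/bin/stopManager.sh"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```

The `# chkconfig: 345 85 15` header is what lets `chkconfig --add` register the script with the run levels and start/stop priorities shown.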

At this point you will have:

  1. The Elastic Load Balancer IP address for the Dmgr.
  2. The Dmgr WAS soap port.
  3. The Dmgr WAS cluster name.
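The Launch Configuration, Auto Scaling Group, and ELB steps for the Dmgr can be sketched with the AWS CLI (classic ELB). All names, IDs, subnets, and instance types below are placeholders; the ports shown are WAS defaults (9060 admin, 8879 SOAP):

```shell
# Launch Configuration built from the custom WAS Dmgr AMI.
aws autoscaling create-launch-configuration \
    --launch-configuration-name was-dmgr-lc \
    --image-id ami-0abc12345 \
    --instance-type m5.large \
    --security-groups sg-0abc12345

# Classic ELB in front of the Dmgr admin port.
aws elb create-load-balancer \
    --load-balancer-name was-dmgr-elb \
    --listeners "Protocol=TCP,LoadBalancerPort=9060,InstanceProtocol=TCP,InstancePort=9060" \
    --subnets subnet-0abc12345

# Health check against the Dmgr SOAP port (8879 is the WAS default).
aws elb configure-health-check \
    --load-balancer-name was-dmgr-elb \
    --health-check Target=TCP:8879,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Fixed-size group of exactly one Dmgr, using ELB health checks.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name was-dmgr-asg \
    --launch-configuration-name was-dmgr-lc \
    --min-size 1 --max-size 1 --desired-capacity 1 \
    --vpc-zone-identifier subnet-0abc12345 \
    --load-balancer-names was-dmgr-elb \
    --health-check-type ELB --health-check-grace-period 300
```

With `--health-check-type ELB`, the Auto Scaling Group replaces the Dmgr instance whenever the ELB health check fails, which is what gives the single Dmgr its HA behavior.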

Create the WAS Node(s) on AWS:

  1. Create another EC2 instance using the WAS base AMI and put it in the WAS VPC.
  2. Now write two scripts (a combination of shell and Jython, using the information you have from the Dmgr instance). Use a naming convention that adds the short host name to the profile, node, and server names:
    1. The first script will execute at bootup; call it WAS_startup.
      1. It will create a WAS node profile.
      2. It will federate the node to the Dmgr.
      3. It will create a WAS server and add it to the WAS cluster.
      4. Put the WAS_startup script in /etc/init.d.
      5. chmod 755 /etc/init.d/WAS_startup
      6. chkconfig --add WAS_startup
      7. chkconfig WAS_startup on
      8. Check it: chkconfig --list WAS_startup
    2. The second script will execute at shutdown; call it WAS_shutdown.
      1. It will delete the WAS server.
      2. It will un-federate the node from the Dmgr.
      3. It will run chkconfig WAS_startup off.
      4. Put WAS_shutdown in /lib/systemd/system-shutdown/.
      5. Note: this location may vary across UNIX/Linux distributions.
      6. chmod 755 /lib/systemd/system-shutdown/WAS_shutdown
  3. Now save this instance as your WAS Nodes AMI.
  4. Restart the instance to see these scripts work.
  5. Upon reboot, in the WAS admin console you should see a new node federated and a new server added to the cluster.
  6. Upon shutdown, you should see the node and server removed from the WAS admin console.
  7. Delete the instance.
  8. If the results matched bullets 5 and 6, move on. Otherwise, go back to step 1 and debug your scripts.
  9. Now create a Launch Configuration/Auto Scaling Group and use the WAS Nodes AMI for the instance.
  10. The Auto Scaling Group will be a group of 1…N. The group should span Availability Zones (AZs). Select a metric that will allow the group to grow and shrink as the load changes.
  11. Start the Launch Configuration.
  12. Now create an Elastic Load Balancer for the WAS nodes and have it health-check the application port on each node.
  13. Modify the Auto Scaling Group to use the Elastic Load Balancer as its health check.
  14. Test your configuration by using a web browser to hit the WAS nodes' ELB IP. Use a load-generation tool to increase and decrease web traffic.
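The core of the WAS_startup script above can be sketched as follows. The install path, Dmgr ELB address, cluster name, and SOAP port are assumptions to be replaced with the values captured from the Dmgr instance; the naming convention appends the short host name as the article describes:

```shell
#!/bin/sh
# WAS_startup - create a node profile, federate it, add a cluster member.
# Paths, the Dmgr address, port, and the cluster name are assumptions.
WAS_HOME=/opt/IBM/WebSphere/AppServer     # assumed install location
SHORT_HOST=$(hostname -s)                 # naming convention: short host name
PROFILE=profile_${SHORT_HOST}
NODE=node_${SHORT_HOST}
SERVER=server_${SHORT_HOST}
DMGR_HOST=was-dmgr-elb.example.com        # the Dmgr ELB address (placeholder)
DMGR_SOAP_PORT=8879                       # default Dmgr SOAP port

# 1. Create a managed (custom) profile for this node.
"$WAS_HOME/bin/manageprofiles.sh" -create \
    -profileName "$PROFILE" \
    -templatePath "$WAS_HOME/profileTemplates/managed" \
    -nodeName "$NODE" -hostName "$(hostname -f)"

# 2. Federate the node into the cell via the Dmgr SOAP port.
"$WAS_HOME/profiles/$PROFILE/bin/addNode.sh" "$DMGR_HOST" "$DMGR_SOAP_PORT"

# 3. Create a server in the cluster with a wsadmin Jython one-liner.
"$WAS_HOME/profiles/$PROFILE/bin/wsadmin.sh" -lang jython \
    -conntype SOAP -host "$DMGR_HOST" -port "$DMGR_SOAP_PORT" -c \
    "AdminTask.createClusterMember('[-clusterName WASCluster -memberConfig [-memberNode $NODE -memberName $SERVER]]'); AdminConfig.save()"
```

The WAS_shutdown script would reverse these steps in order: `AdminTask.deleteClusterMember` via wsadmin, then `removeNode.sh` to un-federate, then `chkconfig WAS_startup off`.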

Thoughts on “An IBM WebSphere Application Server Architecture Using AWS”

  1. Can you please share the sample scripts for creating the nodes and federating them with the Dmgr?

  2. Chuck Misuraca Post author

    No, I cannot. The article is targeted at WAS consultants. Any artifacts associated with this article are the property of Perficient.


Chuck Misuraca, Technical Architect
