In a post last year, I provided an overview of an approach for implementing Oracle WebCenter in Amazon Web Services. I’ve decided to provide a bit more detail in the form of a three-part series covering the configuration of the Amazon Virtual Private Cloud (VPC), the Elastic Compute Cloud (EC2), and Route 53.
In this post, I’ll cover the VPC – a service used to logically isolate a section of AWS for your installation. It is in your new VPC that you will create the required subnets, routing tables, gateways, Elastic IPs, and security groups to ready the network. While the environment I’ll walk you through is intended to run Oracle WebCenter, it is applicable to any collection of systems whose foundation is Oracle Fusion Middleware.
Architectural Overview
The VPC implements a 3-tier local high availability architecture. Each tier will be defined by its own subnet, and Amazon security groups will be used to control the flow of data between them. The tiers are:
- Web Tier – For Oracle HTTP servers (web servers) and Web Cache instances
- App Tier – For Oracle WebLogic Instances running WebCenter Portal and WebCenter Content
- DB Tier – For the Oracle Database instance
This topology achieves its fault tolerance and increased availability by front-ending two or more Oracle HTTP Server (OHS) instances with a load balancer and running the WebCenter Portal and Content components on two or more clustered instances in an active-active configuration. The basis for this topology is detailed in the Enterprise Deployment Reference Topology for Oracle WebCenter.
Oracle uses the term “local high availability” topology to describe high availability systems contained within a single data center. The term fits this configuration because it is contained within a single availability zone (AZ) within a single AWS region. Each AZ operates in a state-of-the-art highly available Amazon data center, so it is extremely rare to have failures that affect the availability of all instances in the same location. However, should such a failure occur, none of the instances in the affected AZ would be available. A higher degree of fault tolerance and availability can be achieved by extending this architecture across multiple AZs within a region, and for maximum availability, the architecture could be further extended across multiple regions. Go to the AWS Architecture Center for guidance and best practices related to highly scalable and reliable applications in the AWS Cloud. For this exercise, however, we’ll stick to a single AZ.
Highly available DB services critical to applications within an enterprise are typically the domain of a dedicated Database Administration team. For that reason, I will not address high availability for WebCenter at the database level and will focus only on the application and web tiers. In a true high availability topology, the single database instance shown in the DB tier would be expanded upon to depict a RAC DB topology, or one of several Oracle Database High Availability Architectures / Solutions.
To get started with the configuration, sign in to AWS with your Identity & Access Management (IAM) credentials.
Virtual Private Cloud (VPC) Setup
Once logged into AWS, click the console icon in the top menu bar, then go to the Virtual Private Cloud dashboard by clicking on the VPC icon.
Click the “Your VPCs” menu option, then click the “Create VPC” button. On the resulting popup screen, provide a VPC name, and set the CIDR block and tenancy as you wish. I’ve decided to use a CIDR block of 10.0.0.0/16. The “/16” network prefix dedicates the first two octets to the internal network of my VPC, leaving the last two to identify up to 65,534 unique hosts. I used 10.0 for my network, but you can choose another private address range. You’ll see later how I further divide the network into 256 subnets of 256 host IPs each. For tenancy I chose “Default,” as I don’t need the instances in this VPC to always run on single-tenant, dedicated hardware. Click the “Yes, Create” button when done.
Once the VPC is created, edit it by clicking the Edit button on the summary tab and make sure DNS resolution and DNS hostnames are both set to yes. This is required for WebCenter software installs, but can be changed once the installs are complete.
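If you’d rather script these steps than click through the console, the AWS SDKs can do the same work. Here’s a minimal sketch using Python and boto3, assuming the SDK is installed and your IAM credentials and region are already configured (the Name tag is just an example):

```python
import boto3

# Assumes IAM credentials and a default region are already configured.
ec2 = boto3.client("ec2")

# Create the VPC with the 10.0.0.0/16 CIDR block and default tenancy.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16", InstanceTenancy="default")
vpc_id = vpc["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "WebCenter-VPC"}])  # example name

# Enable DNS resolution and DNS hostnames (required for the WebCenter installs).
# Each call to modify_vpc_attribute sets exactly one attribute.
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})
```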
Internet Gateway
By default, instances launched into a virtual private cloud (VPC) can’t communicate with the Internet. To enable access to the Internet, you must create an Internet Gateway and attach it to the VPC. Name the gateway whatever you wish.
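The equivalent boto3 calls, continuing the sketch above (the VPC ID is a placeholder for the one created earlier):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: the VPC created earlier

# Create the Internet Gateway, tag it, and attach it to the VPC.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.create_tags(Resources=[igw_id], Tags=[{"Key": "Name", "Value": "WebCenter-IGW"}])  # example name
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
```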
Routing Tables
Create two routing tables: one for traffic internal to the VPC and another that allows external communication across the internet. For clarity, name them “int” and “ext”.
Select the external routing table (ext) you just created then click the “Routes” tab below it. Next, click the “Add another route” button and set its destination to 0.0.0.0/0 and its target to the internet gateway created in the previous step. Click “Save” when finished. This step is what gives the “ext” route table internet access.
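Scripted, the two tables and the default route look something like this (IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder
igw_id = "igw-0123456789abcdef0"  # placeholder

# "int" route table: keeps only the local route the VPC adds automatically.
rt_int = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_tags(Resources=[rt_int], Tags=[{"Key": "Name", "Value": "int"}])

# "ext" route table: add a default route targeting the Internet Gateway.
rt_ext = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_tags(Resources=[rt_ext], Tags=[{"Key": "Name", "Value": "ext"}])
ec2.create_route(RouteTableId=rt_ext, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
```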
Subnets
Create three subnets: one for the web tier, one for the app tier, and one for the DB tier. I use a naming convention that combines the environment name (dev, test, UAT, integration, production, etc.) with the tier it is intended for, but you can use whatever works for you. I use the same availability zone for each of these subnets, and my CIDR blocks are as follows:
- Integration-WebTier = 10.0.254.0/24
- Integration-AppTier = 10.0.253.0/24
- Integration-DBTier = 10.0.252.0/24
Using a network prefix of “/24” for each subnet conveniently dedicates the first three octets to the network portion of the address, with the fourth used for the host IP. Since the containing VPC uses a CIDR block of 10.0.0.0/16, the first two octets of each subnet are fixed at 10.0. This allows me to use the third octet to uniquely identify each of my subnets. I used 252, 253, and 254 for the DB, App, and Web tier subnets respectively, but you can use whatever you want. The key thing here is that the “/16” and “/24” network prefixes make IP addresses easy to understand, since the network, subnet, and host portions of the address break cleanly at octet boundaries. With this configuration I can create up to 256 subnets with 251 hosts per subnet. (Note: AWS reserves both the first four IP addresses and the last IP address in each subnet CIDR block, so they’re not available for our use.) Anyway, this is plenty for this exercise, and it leaves room for growth should you need to support more AZs, regions, or other environments.
Each subnet must be associated with a single route table. Change the route table to “ext” for each of the subnets created. This allows internet access to the subnet, a requirement while setting up the hosts. This will be changed later when the setup is complete and the system hardened, as only the web tier requires internet access.
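Here’s the same subnet setup in boto3, including the route table association (the AZ is just an example; pick whichever single AZ you’re using):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder
rt_ext = "rtb-0123456789abcdef0"  # placeholder: the "ext" route table

subnets = {
    "Integration-WebTier": "10.0.254.0/24",
    "Integration-AppTier": "10.0.253.0/24",
    "Integration-DBTier": "10.0.252.0/24",
}
for name, cidr in subnets.items():
    # Keep all three subnets in the same AZ for this local HA topology.
    sn = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone="us-east-1a")
    sn_id = sn["Subnet"]["SubnetId"]
    ec2.create_tags(Resources=[sn_id], Tags=[{"Key": "Name", "Value": name}])
    # Associate "ext" for now; switch to "int" once the system is hardened.
    ec2.associate_route_table(RouteTableId=rt_ext, SubnetId=sn_id)
```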
Elastic IPs
Create one or more Elastic IPs for the VPC. These Elastic IPs will be assigned to the instances you will create later. You will use these IPs to SSH into the instances.
For simplicity’s sake, I created one Elastic IP for each of the instances I created. You can create fewer if you’d like; however, you’ll have to manually reassign them to instances as needed and track them accordingly when you SSH into the hosts.
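Allocating VPC-scoped Elastic IPs with boto3 is a one-liner per address (the count of five below is just an example; allocate one per instance you plan to launch):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate one VPC-scoped Elastic IP per planned instance (five is an example).
for _ in range(5):
    addr = ec2.allocate_address(Domain="vpc")
    print(addr["PublicIp"], addr["AllocationId"])
```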
Security Groups
Security groups act as virtual firewalls that control the traffic for one or more instances. When you launch an instance, you associate one or more security groups with it. Create five security groups: one for each of the three VPC subnets, another for the load balancer we’ll place in front of the web tier subnet, and a fifth for the sole purpose of allowing SSH. I used the following when creating my security groups, but you can use whatever you’d like:
Name Tag | Group Name | Description |
---|---|---|
WebCenter Front-end Load Balancer | WebLB | Web Server front-end Load Balancer |
WebCenter OHS | OHSWeb | OHS Web Tier |
WebCenter Portal and Content | WCApp | WebCenter Application Tier |
WebCenter Database | DB | Database Tier |
SSH | SSH | Allows SSH access to the instances |
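To create the five groups in boto3 (group names and descriptions are taken from the table above; the VPC ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

groups = {
    "WebLB": "Web Server front-end Load Balancer",
    "OHSWeb": "OHS Web Tier",
    "WCApp": "WebCenter Application Tier",
    "DB": "Database Tier",
    "SSH": "Allows SSH access to the instances",
}
sg_ids = {}
for name, desc in groups.items():
    sg = ec2.create_security_group(GroupName=name, Description=desc, VpcId=vpc_id)
    sg_ids[name] = sg["GroupId"]  # keep the IDs for the ingress rules below
```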
Edit the inbound rules for each of the newly created security groups. The purpose of these rules is to allow traffic only on the specific ports required by the application, and only from the subnet that logically sits directly in front of it (i.e., web tier in front of app tier, app tier in front of database tier).
If you use the default ports for OHS, WebCenter, and Oracle DB, then your security groups should look something like this (a scripted version follows the tables):
**WebLB**
Type | Protocol | Port Range | Source |
---|---|---|---|
HTTP (80) | TCP (6) | 80 | 0.0.0.0/0 |
**OHSWeb**
Type | Protocol | Port Range | Source |
---|---|---|---|
Custom TCP Rule | TCP (6) | 6700 – 6701 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 7777 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 7785 – 7789 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 9999 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 7777 | 0.0.0.0/0 |
Notice that port 7777 has a second entry with source 0.0.0.0/0. This is so that OHS can be tested without going through the load balancer. This can be removed once the test proves successful.
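If you scripted the rules, the temporary rule can be revoked the same way it was added; a sketch (the group ID is a placeholder for the OHSWeb group):

```python
import boto3

ec2 = boto3.client("ec2")

# Remove the temporary world-open 7777 rule once the direct OHS test passes.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the OHSWeb group
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 7777, "ToPort": 7777,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```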
**WCApp**
Type | Protocol | Port Range | Source |
---|---|---|---|
Custom TCP Rule | TCP (6) | 7001 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 8000 – 8080 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 8888 – 8898 | 10.0.254.0/24 |
Custom TCP Rule | TCP (6) | 16000 – 16400 | 10.0.254.0/24 |
**DB**
Type | Protocol | Port Range | Source |
---|---|---|---|
Custom TCP Rule | TCP (6) | 1158 | 10.0.253.0/24 |
Custom TCP Rule | TCP (6) | 1521 | 10.0.253.0/24 |
**SSH**
Type | Protocol | Port Range | Source |
---|---|---|---|
SSH (22) | TCP (6) | 22 | 0.0.0.0/0 |
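And here’s the promised scripted version of the inbound rules. A small helper keeps the calls readable; the group IDs are placeholders for the ones captured when the groups were created, and only a representative rule per group is shown:

```python
import boto3

ec2 = boto3.client("ec2")

def allow_tcp(group_id, from_port, to_port, cidr):
    """Authorize one inbound TCP rule on a security group."""
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": from_port, "ToPort": to_port,
            "IpRanges": [{"CidrIp": cidr}],
        }],
    )

# Placeholder IDs; substitute the ones returned by create_security_group.
allow_tcp("sg-weblb0000000000", 80, 80, "0.0.0.0/0")          # WebLB: HTTP from anywhere
allow_tcp("sg-ohsweb000000000", 6700, 6701, "10.0.254.0/24")  # OHSWeb: from the web tier
allow_tcp("sg-ohsweb000000000", 7777, 7777, "10.0.254.0/24")  # OHSWeb: OHS listen port
allow_tcp("sg-wcapp0000000000", 7001, 7001, "10.0.254.0/24")  # WCApp: from the web tier
allow_tcp("sg-db0000000000000", 1521, 1521, "10.0.253.0/24")  # DB: listener, from the app tier
allow_tcp("sg-ssh000000000000", 22, 22, "0.0.0.0/0")          # SSH from anywhere
```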
At this point your AWS Virtual Private Cloud is fully prepared to support WebCenter. In my next post I’ll cover the setup of the Elastic Compute Cloud (EC2) instances.