Red Hat released OpenShift v3 last week at the Summit, so it is time for me to go out and explore its new features.
I thought the beta training content would be a good starting point. I soon learned, however, that some of the example files had not been adjusted for the latest Kubernetes API changes.
This blog post explains the steps necessary to get to a working OpenShift v3 installation. This is not one of those Zero-to-PaaS screencasts that just show how fast you can get something simple running, but rather a step-by-step instruction on how to get to a real-life installation inside your datacenter that you can use to evaluate OpenShift. The final environment will contain:
- 1 master which also acts as a node
- 2 more dedicated nodes
Prerequisites
I am using Amazon ec2 for my testing, but you can use whatever you would like to use. I have created a little helper repository to get up and running quickly.
Please note that I am using the Enterprise version and not the Origin version, so make sure you have the necessary entitlements handy. This installation assumes that the machines on which you would like to install OpenShift Enterprise v3 are running the latest RHEL 7.1.
Another requirement is a working DNS. My setup consists of three m3.large machines, and I have reserved a dedicated domain (openshift.me) that I have configured the following way:
- openshift-master.openshift.me points to the IP of the desired master
- openshift-node1.openshift.me points to the IP of the desired node1
- openshift-node2.openshift.me points to the IP of the desired node2
- *.go.openshift.me points to the IP address of the desired master (this should point to the node that will host the OpenShift Router, which is the master in our case)
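Before going any further it is worth checking that all four records actually resolve. A minimal sanity check could look like this (the hostname test.go.openshift.me is just an arbitrary name to exercise the wildcard record; adjust the names if you use a different domain):

```shell
# Check that all DNS records for the setup resolve.
# Prints one OK/MISSING line per hostname.
for h in openshift-master.openshift.me \
         openshift-node1.openshift.me \
         openshift-node2.openshift.me \
         test.go.openshift.me; do
    if getent hosts "$h" >/dev/null; then
        echo "$h OK"
    else
        echo "$h MISSING"
    fi
done
```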
Let us start by cloning the helper repository to our local workstation.
[code language="plain"]
cd ~
mkdir openshift
cd openshift
git clone https://github.com/juhoffma/openshift-kickstarter.git
cd openshift-kickstarter
[/code]
This repository contains a couple of files, which I already explained in the README. The next thing to do is to go ahead and copy the env.example to env and fill in the correct values. When that is done, source the file so the variables are set in your environment.
[code language="plain"]
cp env.example env
vi env
source env
[/code]
As I am using Ansible to do all the prereqs on the hosts, you have to fill in the ansible_inventory file with your environment. Remember how I set my hostnames? I explained that above. You will find the same names and roles as described inside of the file. Please adjust it to match your environment but keep the [openshift_master] and [openshift_nodes] sections intact.
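For reference, with the hostnames from above my inventory groups look roughly like this. Treat this as a sketch: keep whatever additional groups and variables the shipped file defines, and note that the master is listed as a node as well only because it also acts as a node in this setup.

```ini
[openshift_master]
openshift-master.openshift.me

[openshift_nodes]
openshift-master.openshift.me
openshift-node1.openshift.me
openshift-node2.openshift.me
```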
The next file to edit is the example_hosts file. This file is required by the oo-install-ose tool, which you will call later in the process. Don't worry too much about it now. Just make sure you keep the sections intact and the names match your environment, as above.
The last things to change are the lines labeled masters: and nodes: inside the file installer.cfg.yml. I am listing only those lines below, so it is clear which ones I mean:
[code language="plain"]
masters: [openshift-master.openshift.me]
nodes: [openshift-node1.openshift.me, openshift-node2.openshift.me, openshift-master.openshift.me]
[/code]
This should cover all the pre-requisites for the kickstarter.
Installation
After you have prepared the files, we can go ahead and run Ansible to handle all the prerequisites on the systems. Here is what the kickstarter will do for you (on all of your hosts):
- Register your systems against RHN and attach them to the correct pool
- Disable all repositories and enable only the ones needed by OpenShift Enterprise
- Install deltarpm
- Remove NetworkManager
- Install all packages required by the OpenShift installer
- Run yum update on the systems
- Install docker
- Alter the docker configuration to allow for insecure registries
- Pull the docker images from registry.access.redhat.com
- Distribute the openshift_aws SSH key to the systems
To start the ansible based preparation of the systems run the following command:
[code language="plain"]
ansible-playbook -v -i ./ansible_inventory ./openshift_prereqs.yml
[/code]
On the master, it will also prepare the installer, so you only have to make minimal configuration changes and can run the installer immediately. The Ansible playbook might take several minutes to complete; it is highly dependent on your internet connection speed.
After it has finished, you can go ahead, log into your master as root, and run the following command:
[code language="plain"]./oo-install-ose[/code]
This command installs OpenShift Enterprise into your infrastructure. The installation process might also take several minutes to complete. But stay strong, we are almost done.
First Steps
Now that the installation is finished, there is not very much left for us to do. What we need is:
- Check whether every node is schedulable
- Check whether the labels have been applied correctly
- Deploy a docker registry to hold all the images, and
- Deploy a router that will serve as an endpoint for our external requests.
First let us check the node configuration. Run the command oc get nodes, which should return the following output:
[code language="plain"]
[root@ip-172-31-18-252 ~]# oc get nodes
NAME LABELS STATUS
ip-172-31-18-251.eu-central-1.compute.internal kubernetes.io/hostname=ip-172-31-18-251.eu-central-1.compute.internal,region=primary,zone=east Ready
ip-172-31-18-252.eu-central-1.compute.internal kubernetes.io/hostname=ip-172-31-18-252.eu-central-1.compute.internal,region=infra,zone=default Ready
ip-172-31-18-253.eu-central-1.compute.internal kubernetes.io/hostname=ip-172-31-18-253.eu-central-1.compute.internal,region=primary,zone=west Ready
[/code]
If you need to make a node schedulable run the command
[code language="plain"]
oadm manage-node ip-172-31-18-252.eu-central-1.compute.internal --schedulable=true
[/code]
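If you want to toggle schedulability for a whole group of nodes at once, oadm manage-node also accepts a label selector instead of an explicit node name. A sketch (check oadm manage-node --help on your version before relying on it):

```shell
# Mark every node carrying the region=primary label schedulable in one go.
oadm manage-node --selector='region=primary' --schedulable=true
```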
To assign labels to nodes, you can run the following commands:
[code language="plain"]
oc label node ip-172-31-18-252.eu-central-1.compute.internal region=infra zone=default
oc label node ip-172-31-18-251.eu-central-1.compute.internal region=primary zone=east
oc label node ip-172-31-18-253.eu-central-1.compute.internal region=primary zone=west
[/code]
You will need the selectors for the commands in the training manual to run successfully. The Managing Nodes article in the OpenShift documentation is a great resource to review what we are doing.
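A quick way to double-check that the labels stuck is to filter oc get nodes with the same selectors. This obviously needs the running cluster, so take it as a sketch:

```shell
# Only the infra node (the master in this setup) should be listed here.
oc get nodes -l region=infra

# The two dedicated nodes should be listed here.
oc get nodes -l region=primary
```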
To deploy a registry, run the following command. Take special note of the --selector attribute, which makes sure to place the registry on the host that is labelled with region=infra.
[code language="plain"]
oadm registry --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --selector="region=infra"
[/code]
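Once the registry deployment has gone through, a docker-registry service and pod should exist. A quick check (again, this needs the live cluster):

```shell
# The registry is exposed as a service named docker-registry on port 5000.
oc get svc docker-registry

# The registry pod should eventually reach the Running state.
oc get pods | grep docker-registry
```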
Since I have not yet been able to pass the --mount-host option successfully to the oadm registry command, I am happy to take any comments.
Now the only thing left is to deploy a router. A router is able to present different SSL certificates to the clients, depending on what resource they are requesting. The training material references the domain *.cloudapps.example.com; I am using the wildcard domain *.go.openshift.me. To make sure that the router is able to present an SSL certificate to the client, we have to create that certificate first. We are using the built-in CA for that.
[code language="plain"]
CA=/etc/openshift/master
oadm create-server-cert --signer-cert=$CA/ca.crt --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt --hostnames='*.go.openshift.me' --cert=cloudapps.crt --key=cloudapps.key
[/code]
The --cert and --key arguments just determine the output filenames, so they can really be anything. Let us merge the two separate files into one.
[code language="plain"]cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem[/code]
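If you want to double-check what ended up in the merged file, openssl can print the subject and validity of the first certificate in the bundle (assuming the filenames used above):

```shell
# Show subject and validity dates of the server certificate in the bundle.
openssl x509 -in cloudapps.router.pem -noout -subject -dates
```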
And now we can deploy the router (see the selector again?)
[code language="plain"]
oadm router --default-cert=cloudapps.router.pem --credentials=/etc/openshift/master/openshift-router.kubeconfig --selector='region=infra' --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
[/code]
It can take a bit for the router to deploy. Once it is deployed, you will see something like this:
[code language="plain"]
[root@ip-172-31-18-252 ~]# oc get pods
NAME READY REASON RESTARTS AGE
docker-registry-1-edt0z 1/1 Running 0 5d
router-1-wje21 1/1 Running 1 5d
[/code]
Now it is time to check the status page of the HAProxy router. The status page is still blocked by iptables, so let us open the port:
[code language="plain"]
iptables -I OS_FIREWALL_ALLOW -p tcp -m tcp --dport 1936 -j ACCEPT
[/code]
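Note that a rule inserted with iptables -I does not survive a reboot, and the stats page asks for credentials that are printed when the router is created. A sketch of both steps (the stats user and password below are assumptions, adjust them to your router's output):

```shell
# To make the rule permanent, add the equivalent line to
# /etc/sysconfig/iptables next to the other OS_FIREWALL_ALLOW rules:
#   -A OS_FIREWALL_ALLOW -p tcp -m tcp --dport 1936 -j ACCEPT

# Then fetch the stats page; replace user and password with the
# values printed during router creation.
STATS_USER=admin
STATS_PASSWORD=changeme
curl -s -u "$STATS_USER:$STATS_PASSWORD" http://openshift-master.openshift.me:1936/
```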
Now you are all set, and most of the training material should work. I have opened a pull request for the training material which contains my modified files.
Conclusion
This will hopefully get you started quickly with OpenShift version 3. It would be great if you could leave some comments.
5 replies on "An Introduction into OpenShift v3"
I just managed to get a repository running by following Chapter 1.5.2.1 ( https://access.redhat.com/beta/documentation/en/openshift-enterprise-30-administrator-guide#deploying-a-docker-registry )
Just not yet sure if it is really persistent ...
Patrick
Hey Patrick, great to see you got it working. Do you mind sharing what you used as --mount-host= ?
/mnt/registry
Great blog entry! Thank you Buddy.
I am building something similar running in vagrant and this is really useful.
Hi,
I believe the mount-host option won't work without adding a service account and adding it to the privileged SCC. I went with NFS shared storage by editing the deploymentConfig after creating the registry; this will force the registry pods to be redeployed with your changes:
[code language="plain"]
# oc edit dc/docker-registry - replace emptyDir
volumeMounts:
- mountPath: /registry
  name: registry-storage
...
volumes:
- name: registry-storage
  nfs:
    path: /exports/registry
    server: NNN.NNN.NNN.NNN
[/code]