Sunday, November 26, 2017

Install IBM Cloud Private v2.1.0

Hi folks,

In this post I want to show you how I installed a private cloud in a local environment using IBM Cloud Private v2.1.0.

The first thing I did was estimate the minimum infrastructure needed, based on the product's minimum requirements.

For this example I used the following servers, sized to the minimum requirements:
 
Node         Hostname        IP            RAM    Disk space
Boot         icpboot         192.0.2.153   4 GB   100 GB
Management   icpmanagement   192.0.2.151   8 GB   100 GB
Master       icpmaster       192.0.2.155   4 GB   150 GB
Proxy        icpproxy        192.0.2.154   4 GB   40 GB
Worker 1     icpworker1      192.0.2.148   4 GB   100 GB
Worker 2     icpworker2      192.0.2.149   4 GB   100 GB
Worker 3     icpworker3      192.0.2.150   4 GB   100 GB
 
Important: The boot, master, proxy, and management nodes in your cluster must use the same platform architecture.

The operating system used for all nodes is Ubuntu 16.04 LTS.
 

Boot node 

A boot or bootstrap node is used for running installation, configuration, node scaling, and cluster updates. Only one boot node is required for any cluster. You can use a single node for both master and boot.

Management node

A management node is an optional node that only hosts management services like monitoring, metering, and logging. By configuring dedicated management nodes, you can prevent the master node from becoming overloaded.

Master node

A master node provides management services and controls the worker nodes in a cluster. Master nodes host processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. Multiple master nodes can be deployed in a high availability (HA) environment to allow for failover if the leading master host fails. Hosts that can act as the master are called master candidates.

Proxy node

A proxy node is a node that transmits external requests to the services created inside your cluster. Multiple proxy nodes are deployed in a high availability (HA) environment to allow for failover if the leading proxy host fails. While you can use a single node as both master and proxy, it is best to use dedicated proxy nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside the cluster.

Worker node

A worker node is a node that provides a containerized environment for running tasks. As demands increase, more worker nodes can easily be added to your cluster to improve performance and efficiency. A cluster can contain any number of worker nodes, but a minimum of one worker node is required.

Preparing your cluster for installation

Note: For this example installation I used the root user, to save time when copying and pasting commands between the open node terminals.
1) Ensure network connectivity across all nodes in your cluster
2) Enable remote login as root on each node in your cluster

$ sed -i 's/prohibit-password/yes/' /etc/ssh/sshd_config

$ systemctl restart ssh
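If you want to confirm the change took effect (assuming the default Ubuntu 16.04 sshd_config, where the directive was PermitRootLogin prohibit-password), a quick grep should now show:

$ grep "^PermitRootLogin" /etc/ssh/sshd_config
PermitRootLogin yes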
 
3) Ensure that all default ports are open but not in use, and that no firewall rules block these ports.
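For a quick spot check that a given port is free (for example 8443, which the management console uses at the end of the installation), you can list the listening sockets; no output means nothing is bound to it:

$ ss -tlnp | grep 8443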
4) Configure the /etc/hosts file on each node in your cluster (an example is shown after the notes below).

$ su -
$ vi /etc/hosts


Important:
  • Ensure that the host name is listed with the IP address of the local host. You cannot list the host name against the loopback address, 127.0.0.1.
  • Host names in the /etc/hosts file cannot contain uppercase letters.
  • Comment out or remove the line that begins with 127.0.1.1.
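As an illustration, the /etc/hosts file on each node could end up looking like this (using the host names and IP addresses from the table at the beginning of the post; adjust to your own environment):

127.0.0.1       localhost
# 127.0.1.1 line commented out or removed

192.0.2.153     icpboot
192.0.2.151     icpmanagement
192.0.2.155     icpmaster
192.0.2.154     icpproxy
192.0.2.148     icpworker1
192.0.2.149     icpworker2
192.0.2.150     icpworker3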
5) Synchronize the clocks on each node in the cluster. To synchronize your clocks, you can use the Network Time Protocol (NTP). In this example, on each node in the cluster I'm going to use the NTP server installed on the management node.

$ su -
$ apt-get install -y ntp
$ vi /etc/ntp.conf

Add the line pool icpmanagement to the configuration file.
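For example, the relevant part of /etc/ntp.conf on a node could look like this (commenting out the default Ubuntu pools is optional; the key point is adding the management node):

# pool 0.ubuntu.pool.ntp.org iburst
# pool 1.ubuntu.pool.ntp.org iburst
pool icpmanagement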


Restart NTP and verify status

$ systemctl restart ntp
$ ntpq -p



6) Configure the Virtual Memory setting on each node in your cluster
 
$ sysctl -w vm.max_map_count=262144

Make the changes permanent by adding the following line to the bottom of the /etc/sysctl.conf file
 
$ vi /etc/sysctl.conf 

Add line:
vm.max_map_count=262144


To check the current value use the command:
 
$ sysctl vm.max_map_count
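Alternatively, if you prefer a single step instead of editing the file by hand, appending the line and reloading the settings should work as well:

$ echo "vm.max_map_count=262144" >> /etc/sysctl.conf
$ sysctl -p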

7) Install docker on each node in your cluster.
Update your Ubuntu repositories and install the extra packages

$ apt-get update

$ apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual

$ apt-get install -y apt-transport-https ca-certificates curl software-properties-common

Note: Some dependency packages may already be installed in the system.

Add Docker’s official GPG key
 
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Set up the Docker stable repository, update the local cache, and install

$ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ apt-get update
$ apt-get install -y docker-ce

Make sure Docker is running

$ docker run hello-world



Install Python and pip

$ apt-get install -y python-setuptools

$ easy_install pip
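A quick sanity check that both are available (exact version numbers will vary):

$ python --version
$ pip --version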


8) Restart all cluster nodes

$ shutdown -r now

Configure passwordless SSH

1) Configure passwordless SSH from the boot node to all other nodes. Accept the default location for ssh-keygen.

$ su -
$ ssh-keygen -t rsa -P ''

2) Copy the resulting public key to each node in the cluster (including the boot node on which we are currently operating).

 
$ ssh-copy-id -i .ssh/id_rsa root@icpboot
$ ssh-copy-id -i .ssh/id_rsa root@icpmanagement
$ ssh-copy-id -i .ssh/id_rsa root@icpmaster
$ ssh-copy-id -i .ssh/id_rsa root@icpproxy
$ ssh-copy-id -i .ssh/id_rsa root@icpworker1
$ ssh-copy-id -i .ssh/id_rsa root@icpworker2
$ ssh-copy-id -i .ssh/id_rsa root@icpworker3
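To confirm that passwordless SSH now works from the boot node, a quick loop over the host names from the table should print each node's host name without prompting for a password (a minimal check, assuming the host names resolve via /etc/hosts):

$ for host in icpboot icpmanagement icpmaster icpproxy icpworker1 icpworker2 icpworker3; do ssh root@$host hostname; done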

Install IBM Cloud Private

1) Create a directory on the boot node to hold the installation configuration files, and move the binaries into it.

$ mkdir -p  /opt/icp/cluster/images

2) From the installation image, copy (or move) ibm-cloud-private-x86_64-2.1.0.tar.gz to /opt/icp/cluster/images/
3) Load the ICP images into Docker (this operation takes approximately 45 minutes)

$ cd /opt/icp/cluster/images/
$ tar xf ibm-cloud-private-x86_64-2.1.0.tar.gz -O | docker load
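Once the load finishes, you can verify that the images are in the local Docker image cache (the exact list of repositories and tags depends on the release):

$ docker images | grep ibmcom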

4) Extract the configuration examples (this populates the cluster folder with the sample file structure)

$ cd /opt/icp/
$ docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception:2.1.0-ee cp -r cluster /data
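After this step the /opt/icp/cluster directory should contain the sample configuration files used in the next steps, such as config.yaml and hosts:

$ ls /opt/icp/cluster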

5) Copy the SSH key into the installation directory

$ cp ~/.ssh/id_rsa /opt/icp/cluster/ssh_key
$ chmod 400 /opt/icp/cluster/ssh_key

6) Edit the /opt/icp/cluster/hosts file and enter the IP addresses of all nodes

$ vi /opt/icp/cluster/hosts

[boot]
192.0.2.153

[master]
192.0.2.155

[worker]
192.0.2.148
192.0.2.149
192.0.2.150

[proxy]
192.0.2.154

[management]
192.0.2.151

7) Configure the cluster by editing the config.yaml file and adjusting the IP ranges

$ vi /opt/icp/cluster/config.yaml


## Network in IPv4 CIDR format
network_cidr: 192.1.0.0/16

## Kubernetes Settings
service_cluster_ip_range: 192.0.2.1/24



8) Deploy the IBM Cloud Private application


$ cd /opt/icp/cluster

$ docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install | tee install.log


This process takes a long time; you can check the progress and completion of the deployment in the install.log trace file.
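If you want to follow the progress from another terminal while the installer runs, you can tail the log file:

$ tail -f /opt/icp/cluster/install.log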



9) Verify the status of your installation


If the installation succeeded, the access information for your cluster is displayed. Open https://master_ip:8443 in a browser and log in with the default credentials admin/admin.

https://192.0.2.155:8443


And that's it! Easy, no?

For more information:

- IBM Cloud Private 2.1 - Overview

If you encounter errors during installation, see Troubleshooting.

Cheers!!!
