Saturday, February 23, 2019

OpenAPI Experience

Hi folks,

Last week I was doing a little more research, deepening my knowledge of programming APIs, and I found an interesting path: OpenAPI.



You do not need deep programming knowledge: knowing a bit of Node.js or Python and understanding what Express is will be enough.

What pleasantly surprised me was the Swagger tooling, which converts the comments on your exposed services into interactive documentation you can use to test them.
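If you want to try the same path, a minimal starting point could look like this (a sketch assuming a Node.js/Express project; swagger-jsdoc and swagger-ui-express are my assumption for the comment-to-documentation tooling, since there are several options):

$ npm install express swagger-jsdoc swagger-ui-express

With that combination, swagger-jsdoc builds the OpenAPI description from the comments on your routes, and swagger-ui-express serves it as an interactive page where each exposed service can be tried out.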

In short, a good experience, and more to come! I recommend it to you.


Let's go!

Sunday, April 1, 2018

ALM, DevOps, CI, CD and more CD. Life is one and it is a carnival ...

Hi folks,

I am currently involved in a cloud computing service project that uses the IBM Cloud DevOps solution for managing applications, a practice known as ALM (Application Lifecycle Management).

As happens to all of us who dedicate ourselves to this wonderful and "logical" world of computing, we constantly use terms like ALM, DevOps, Continuous Integration, Continuous Delivery, and Continuous Deployment.

All these terms are like a carnival of masks: everything looks the same, but it is not.



Do we really know their meaning?

In this post I will try to summarize the meaning of each of these terms in a "logical" and simple way.

ALM (Application Lifecycle Management)


The process of incorporating, coordinating, and monitoring all the activities necessary to create a software solution or application, from the birth of the need through its definition, development, testing, deployment, and maintenance.

DevOps (Development + Operations)


It is an agile practice that focuses on collaboration and effective communication between departments, in order to achieve maximum collaboration and integration between software development and operations, without forgetting the business and testing. The tools that support this practice tear down the walls between departments, so that there is no dead time between development and the execution environments.

CI (Continuous integration)


It is a software development discipline that refers mostly to the automation of integration, builds, and frequent testing in the development environment. CI tries to avoid the integration errors that usually occur when developers integrate new changes.
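As an illustration, the automation a CI server runs on every new change often boils down to a sequence like this (a minimal sketch, assuming a Node.js project whose build and tests run through npm; the steps are generic, not tied to any particular CI tool):

$ git pull
$ npm install
$ npm test

If any step fails, the integration is rejected and the developer is notified before the error reaches the rest of the team.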

CD (Continuous Delivery)


It is a software development discipline in which software is built in such a way that it can be released to production at any time. It is an extension of CI that does not necessarily imply releasing a version every time there is a change: there is human intervention, someone who must make the decision to release the change. In agile practice this is the "Product Owner", in other words, the person responsible for the business.

CD (Continuous Deployment)


It is a software development discipline in which every change that passes all the stages of the production chain is released to customers. There is no human intervention; only a failed test will prevent a new change from being deployed to production. Continuous deployment goes one step beyond continuous delivery.

So?


So it should be understood that any ALM process can adopt agile practices, using different tools in a DevOps context, so that the software disciplines of CI (Continuous Integration), CD (Continuous Delivery), and CD (Continuous Deployment) guarantee:

  • Fewer programming errors
  • Easier construction of versions
  • Reduced testing effort
  • Reduced deployment complexity
  • Less pressure when making changes to versions
  • Faster development
  • Customers can see continuous progress on quality tasks

What a carnival of terms!

Putting it all together, as Maluma says:

"You do not have to suffer, you do not have to cry, life is one and it's a carnival"

Happy Sunday to all!!!!

Sunday, November 26, 2017

Install IBM Cloud Private v2.1.0

Hi folks,

In this post I want to show you how I installed a private cloud in a local environment using IBM Cloud Private v2.1.0.

The first thing I did was estimate the minimum necessary infrastructure based on the product's minimum requirements.

For this example I have used the following servers with the minimum requirements:
 
Node        Hostname        IP           RAM    Disk space
Boot        icpboot         192.0.2.153  4 GB   100 GB
Management  icpmanagement   192.0.2.151  8 GB   100 GB
Master      icpmaster       192.0.2.155  4 GB   150 GB
Proxy       icpproxy        192.0.2.154  4 GB   40 GB
Worker 1    icpworker1      192.0.2.148  4 GB   100 GB
Worker 2    icpworker2      192.0.2.149  4 GB   100 GB
Worker 3    icpworker3      192.0.2.150  4 GB   100 GB
 
Important: The boot, master, proxy, and management nodes in your cluster must use the same platform architecture.

The operating system used for all nodes is Ubuntu 16.04 LTS.
 

Boot node 

A boot or bootstrap node is used for running installation, configuration, node scaling, and cluster updates. Only one boot node is required for any cluster. You can use a single node for both master and boot.

Management node

A management node is an optional node that only hosts management services like monitoring, metering, and logging. By configuring dedicated management nodes, you can prevent the master node from becoming overloaded.

Master node

A master node provides management services and controls the worker nodes in a cluster. Master nodes host processes that are responsible for resource allocation, state maintenance, scheduling, and monitoring. Multiple master nodes are used in a high availability (HA) environment to allow for failover if the leading master host fails. Hosts that can act as the master are called master candidates.

Proxy node

A proxy node is a node that transmits external requests to the services created inside your cluster. Multiple proxy nodes are deployed in a high availability (HA) environment to allow for failover if the leading proxy host fails. While you can use a single node as both master and proxy, it is best to use dedicated proxy nodes to reduce the load on the master node. A cluster must contain at least one proxy node if load balancing is required inside the cluster.

Worker node

A worker node is a node that provides a containerized environment for running tasks. As demands increase, more worker nodes can easily be added to your cluster to improve performance and efficiency. A cluster can contain any number of worker nodes, but a minimum of one worker node is required.

Preparing your cluster for installation

Note: For this example installation I used the root user, to save time while copying and pasting commands between the open node terminals.
1) Ensure network connectivity across all nodes in your cluster
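A quick way to check this from any node, using the IP addresses from the table above:

$ for ip in 192.0.2.153 192.0.2.151 192.0.2.155 192.0.2.154 192.0.2.148 192.0.2.149 192.0.2.150; do ping -c 1 -W 2 $ip > /dev/null && echo "$ip OK" || echo "$ip UNREACHABLE"; done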
2) Enable remote login as root on each node in your cluster. The sed command below switches the PermitRootLogin directive from prohibit-password to yes:

$ sed -i 's/prohibit-password/yes/' /etc/ssh/sshd_config

$ systemctl restart ssh
 
3) Ensure that all default ports are open and not already in use, and that no firewall rules block these ports.
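To verify that a given port is free, you can query it with ss; for example, for 8443 (the console port used at the end of this post), no output means nothing is listening on it:

$ ss -tln | grep 8443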
4) Configure the /etc/hosts file on each node in your cluster.

$ su -
$ vi /etc/hosts
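Using the addresses from the table above, the file on each node would contain these lines:

192.0.2.153 icpboot
192.0.2.151 icpmanagement
192.0.2.155 icpmaster
192.0.2.154 icpproxy
192.0.2.148 icpworker1
192.0.2.149 icpworker2
192.0.2.150 icpworker3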


Important:
  • Ensure that each host name is listed next to the node's real IP address. You cannot list the host name under the loopback address, 127.0.0.1
  • Host names in the /etc/hosts file cannot contain uppercase letters
  • Comment out or remove the line that begins with 127.0.1.1
5) Synchronize the clocks on each node in the cluster. To synchronize your clocks, you can use the network time protocol (NTP). In this example, each node in the cluster is going to use the NTP server installed on the management node.

$ su -
$ apt-get install -y ntp
$ vi /etc/ntp.conf

Add pool icpmanagement to the configuration file.
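The added line in /etc/ntp.conf would look like this (iburst, which speeds up the first synchronization, is an optional addition of mine):

pool icpmanagement iburst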


Restart NTP and verify status

$ systemctl restart ntp
$ ntpq -p



6) Configure the Virtual Memory setting on each node in your cluster
 
$ sysctl -w vm.max_map_count=262144

Make the changes permanent by adding the following line to the bottom of the /etc/sysctl.conf file
 
$ vi /etc/sysctl.conf 

Add line:
vm.max_map_count=262144


To check the current value use the command:
 
$ sysctl vm.max_map_count

7) Install Docker on each node in your cluster.
Update your Ubuntu repositories and install the extra packages

$ apt-get update

$ apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual

$ apt-get install -y apt-transport-https ca-certificates curl software-properties-common
$ apt-get install software-properties-common
$ apt-get install curl

Note: Some dependency packages may already be installed in the system.

Add Docker’s official GPG key
 
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Set up the Docker stable repository, update the local cache, and install

$ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ apt-get update
$ apt-get install -y docker-ce

Make sure Docker is running

$ docker run hello-world



Install Python and pip

$ apt-get install -y python-setuptools

$ easy_install pip
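A quick sanity check that both are now available (on Ubuntu 16.04 the python-setuptools package pulls in Python 2.7):

$ python --version
$ pip --version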


8) Restart all cluster nodes

$ shutdown -r now

Configure passwordless SSH

1) Configure passwordless SSH from the boot node to all other nodes. Accept the default location for ssh-keygen.

$ su -
$ ssh-keygen -t rsa -P ''

2) Copy the resulting id_rsa key file to each node in the cluster (including the boot node on which we are currently operating).

 
$ ssh-copy-id -i .ssh/id_rsa root@icpboot
$ ssh-copy-id -i .ssh/id_rsa root@icpmanagement
$ ssh-copy-id -i .ssh/id_rsa root@icpmaster
$ ssh-copy-id -i .ssh/id_rsa root@icpproxy
$ ssh-copy-id -i .ssh/id_rsa root@icpworker1
$ ssh-copy-id -i .ssh/id_rsa root@icpworker2
$ ssh-copy-id -i .ssh/id_rsa root@icpworker3
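To confirm that passwordless login works, any node should now answer without prompting for a password; for example, this should print the node's hostname:

$ ssh root@icpworker1 hostname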

Install IBM Cloud Private

1) Create a directory on the boot node to hold the installation configuration files, and move the binaries into the directory created.

$ mkdir -p  /opt/icp/cluster/images

2) From the installation image, copy (or move) ibm-cloud-private-x86_64-2.1.0.tar.gz to /opt/icp/cluster/images/
3) Load the ICP images into Docker (this operation takes approximately 45 minutes)

$ cd /opt/icp/cluster/images/
$ tar xf ibm-cloud-private-x86_64-2.1.0.tar.gz -O | docker load
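Once the load finishes, the ICP images should appear in the local Docker image list:

$ docker images | grep ibmcom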

4) Extract the configuration examples (this creates a sample data structure in the cluster folder)

$ cd /opt/icp/
$ docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception:2.1.0-ee cp -r cluster /data

5) Copy the SSH key into the installation directory

$ cp ~/.ssh/id_rsa /opt/icp/cluster/ssh_key
$ chmod 400 /opt/icp/cluster/ssh_key

6) Edit the /opt/icp/cluster/hosts file and enter the IP addresses of all nodes

$ vi /opt/icp/cluster/hosts

[boot]
192.0.2.153

[master]
192.0.2.155

[worker]
192.0.2.148
192.0.2.149
192.0.2.150

[proxy]
192.0.2.154

[management]
192.0.2.151

7) Configure the cluster by editing the config.yaml file, modifying the IP ranges

$ vi /opt/icp/cluster/config.yaml


## Network in IPv4 CIDR format
network_cidr: 192.1.0.0/16

## Kubernetes Settings
service_cluster_ip_range: 192.0.2.1/24



8) Deploy the IBM Cloud Private application


$ cd /opt/icp/cluster

$ docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install | tee install.log


This process takes a long time; you can verify the completion of the deployment in the install.log trace file.
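You can follow the progress from another terminal while the installer runs:

$ tail -f /opt/icp/cluster/install.log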



9) Verify the status of your installation


If the installation succeeded, the access information for your cluster is displayed. Open a browser at https://master_ip:8443 and log in with the default credentials admin/admin.

https://192.0.2.155:8443
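Or check from the command line (the -k flag skips certificate validation; I am assuming the console still uses its default self-signed certificate):

$ curl -k https://192.0.2.155:8443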


And that's it! Easy, no?

For more information:

- IBM Cloud Private 2.1 - Overview

If you encounter errors during installation, see Troubleshooting.

Cheers!!!

Saturday, September 16, 2017

What is happening with our privacy on the net? GDPR, I need you...

Five days ago a news story was published that I had more or less sensed was coming; sooner or later I knew it would have to happen, and well, what we do not know is perhaps better left unknown for now...
The Data Protection Agency fines Facebook 1.2 million for using information without permission
We already know "the price we have to pay" for using social networks that are offered to us as "free" on the internet.
But most of us know that brands use our data for marketing and sales purposes, even though we know our rights regarding our personal data.

Today we are aware of the threat of cyber data theft across all the devices we use that are connected to each other.

Who can control and regulate the private data we hand over, so that it is not used without our permission?

It is understood that every country has its own data protection agency; in Spain we have the Agencia Española de Protección de Datos (AEPD), but what happens in France or Germany? Are the same fines applied?

Well, the European GDPR regulation, which takes effect in 2018, toughens the fines for the misuse of personal information and demands more attention to the privacy of the user and/or customer.

What is GDPR?
The GDPR (General Data Protection Regulation) aims to create a legal framework for data protection across the European Union, with the goal of giving citizens back control over their personal data, while imposing strict rules on those who host and "process" this data, anywhere in the world. The regulation also introduces rules on the free movement of personal data within and outside the European Union.

The regulation becomes definitively applicable on May 25, 2018, and most organizations are concerned about the significant financial penalties it can impose for non-compliance, and therefore about consolidating and building trust among their customers.
Yesterday I had the good fortune to attend an event at my own house, IBM, on this subject, which I believe concerns us all; at least it concerns me from the point of view of a user.

IBM offers a complete approach to preparing for GDPR compliance with solutions and services, from assessment through to full implementation. The approach covers all the activities needed to support GDPR in five domains:
  • GDPR governance
  • Employee communication and training
  • Processes
  • Data
  • Security
As should also be understood, IBM does not provide legal, accounting, or auditing advice, nor does it represent or warrant that its services or products guarantee that companies comply with current laws or regulations, since companies are responsible for ensuring their own compliance with applicable laws and regulations, including the European Union's GDPR.
IBM helps companies align with the regulation and supports the creation of governance in the context of GDPR.

More information about this here.

Going back to the beginning, news like this is necessary so that we users can feel protected in the face of so much data collection, public and private.
I, and let no one think otherwise, will keep using social networks both privately and professionally. I have friends all over the world, and my work and private life only allow me to say "hola" or "hi" from time to time. And that, little as it is, is better than nothing.

On the use of social networks and their impact on society, generating millions of pieces of information, "Big Data", I will talk in another post, since, as is well known, more interactions with users keep being added, such as capturing personal feelings.

Happy weekend!