Good morning everyone,
With this post we will close this series on configuring highly available resources on Linux, using the corosync/pacemaker stack.
What do we need in order to provide a service in high availability?
If we think about it, we come to the conclusion that our resource will need, at the very least, an IP of its own: one that does not belong to any of the cluster machines and that can float freely between them, providing an access point to the service, whatever that service may be.
So let's start by configuring a simple resource, just an IP:
# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.0.0.3 cidr_netmask=24 op monitor interval=30s
The command itself is very simple:
Resource name: ClusterIP
The ocf:heartbeat:IPaddr2 section indicates which resource agent to use: the standard (ocf), the specific provider (in our case heartbeat) and finally IPaddr2, the script that will actually manage the resource.
The remaining part, ip=10.0.0.3 cidr_netmask=24, is the configuration of the Virtual IP (VIP) that will hop between the servers without belonging specifically to any of them.
The op monitor interval=30s section tells the cluster to monitor the resource; if it fails, the IP is moved to another server considered valid in the cluster.
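The 30s interval is just a starting point. If you later want to inspect or tune the monitor operation, pcs can do it on the live resource; the syntax below is what the pcs 0.9 series used in these examples expects, and the values are only an example:
# pcs resource show ClusterIP
# pcs resource update ClusterIP op monitor interval=15s timeout=20s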
To find out which standards are available on a system, run:
# pcs resource standards
ocf
lsb
service
systemd
stonith
To get a list of OCF resource providers, run:
# pcs resource providers
heartbeat
pacemaker
And to list all the resource management scripts available within the heartbeat OCF provider, run:
# pcs resource agents ocf:heartbeat
AoEtarget
AudibleAlarm
CTDB
ClusterMon
Delay
Dummy
EvmsSCC
Evmsd
Filesystem
ICP
…..
slapd
symlink
syslog-ng
tomcat
varnish
vmware
zabbixserver
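To see which parameters a given agent accepts (and which of them are mandatory), ask pcs to describe it; for example, for the agent we used above:
# pcs resource describe ocf:heartbeat:IPaddr2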
Time to check on the resource we have just created:
# pcs status
Cluster name: cluster1
Last updated: Thu Dec 17 15:59:14 2015 Last change: Thu Dec 17 15:41:05 2015 by root via cibadmin on xfvlab5
Stack: corosync
Current DC: xfvlab5 (version 1.1.13-3.fc23-44eb2dd) - partition with quorum
2 nodes and 1 resource configured
Online: [ xfvlab5 xfvlab11 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started xfvlab11
PCSD Status:
xfvlab5 member (xfvlab5): Online
xfvlab11 member (xfvlab11): Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
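On the node currently holding the resource (xfvlab11 in the output above) we can also confirm that the address was really added to the network interface. A quick check, assuming the interface is ens3 as in the logs further below, where the VIP should appear as a secondary address:
# ip -4 addr show dev ens3 | grep 10.0.0.3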
To test our cluster's high-availability mechanism, let's force a failover of the resource we configured, as if one of the members had crashed:
Initially:
# pcs status
Cluster name: cluster1
Last updated: Thu Dec 17 11:09:03 2015 Last change: Thu Dec 17 10:41:06 2015 by root via cibadmin on xfvlab5
Stack: corosync
Current DC: xfvlab5 (version 1.1.13-3.fc23-44eb2dd) - partition with quorum
2 nodes and 1 resource configured
Online: [ xfvlab11 xfvlab5 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started xfvlab11
PCSD Status:
xfvlab5 member (xfvlab5): Online
xfvlab11 member (xfvlab11): Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Simulating the failure:
# pcs cluster stop xfvlab11
xfvlab11: Stopping Cluster (pacemaker)…
xfvlab11: Stopping Cluster (corosync)…
[root@xfvlab11 ~]# pcs status
Error: cluster is not currently running on this node
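Stopping the whole cluster stack on a node is the bluntest way to test. If you just want to force a failover without taking corosync/pacemaker down, two gentler (and reversible) options are putting the node in standby or moving the resource explicitly, for example:
# pcs cluster standby xfvlab11
# pcs cluster unstandby xfvlab11
or
# pcs resource move ClusterIP xfvlab5
Keep in mind that pcs resource move works by creating a location constraint, which you will want to remove afterwards (pcs resource clear ClusterIP on recent pcs versions, or by deleting the constraint it created with pcs constraint remove).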
Back on member xfvlab5, we can check the cluster status:
[root@xfvlab5 ~]# pcs status
Cluster name: cluster1
Last updated: Thu Dec 17 16:11:47 2015 Last change: Thu Dec 17 15:41:05 2015 by root via cibadmin on xfvlab5
Stack: corosync
Current DC: xfvlab5 (version 1.1.13-3.fc23-44eb2dd) - partition with quorum
2 nodes and 1 resource configured
Online: [ xfvlab5 ]
OFFLINE: [ xfvlab11 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started xfvlab5
PCSD Status:
xfvlab5 member (xfvlab5): Online
xfvlab11 lost (xfvlab11): Offline
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
We can see that the former owner of the resource (xfvlab11) has given it up and xfvlab5 has picked it up, keeping it available.
The same can be verified in the operating system logs:
Dec 17 16:10:46 xfvlab5 pengine[4017]: notice: On loss of CCM Quorum: Ignore
Dec 17 16:10:46 xfvlab5 pengine[4017]: notice: Scheduling Node xfvlab11 for shutdown
Dec 17 16:10:46 xfvlab5 pengine[4017]: notice: Move ClusterIP#011(Started xfvlab11 -> xfvlab5)
Dec 17 16:10:46 xfvlab5 crmd[4018]: notice: Initiating action 6: stop ClusterIP_stop_0 on xfvlab11
Dec 17 16:10:46 xfvlab5 pengine[4017]: notice: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-input-8.bz2
Dec 17 16:10:46 xfvlab5 crmd[4018]: notice: Initiating action 7: start ClusterIP_start_0 on xfvlab5 (local)
Dec 17 16:10:46 xfvlab5 crmd[4018]: notice: do_shutdown of xfvlab11 (op 10) is complete
Dec 17 16:10:46 xfvlab5 crmd[4018]: notice: Transition aborted: Peer Halt (source=do_te_invoke:158, 0)
Dec 17 16:10:46 xfvlab5 crmd[4018]: notice: Transition aborted by transient_attributes.2 'create': Transient attribute change (cib=0.8.10, source=abort_unless_down:319, path=/cib/status/node_state[@id='2'], 0)
Dec 17 16:10:46 xfvlab5 IPaddr2(ClusterIP)[5301]: INFO: Adding inet address 10.0.0.3/24 with broadcast address 10.0.0.255 to device ens3
Dec 17 16:10:46 xfvlab5 IPaddr2(ClusterIP)[5301]: INFO: Bringing device ens3 up
Dec 17 16:10:47 xfvlab5 IPaddr2(ClusterIP)[5301]: INFO: /usr/libexec/heartbeat/send_arp -i 200 -r 5 -p /var/run/resource-agents/send_arp-10.0.0.3 ens3 10.0.0.3 auto not_used not_used
Dec 17 16:10:47 xfvlab5 attrd[4016]: notice: crm_update_peer_proc: Node xfvlab11[2] – state is now lost (was member)
Dec 17 16:10:47 xfvlab5 attrd[4016]: notice: Removing all xfvlab11 attributes for attrd_peer_change_cb
Dec 17 16:10:47 xfvlab5 attrd[4016]: notice: Removing xfvlab11/2 from the membership list
Dec 17 16:10:47 xfvlab5 attrd[4016]: notice: Purged 1 peers with id=2 and/or uname=xfvlab11 from the membership cache
Dec 17 16:10:47 xfvlab5 crmd[4018]: notice: Operation ClusterIP_start_0: ok (node=xfvlab5, call=6, rc=0, cib-update=62, confirmed=true)
Dec 17 16:10:47 xfvlab5 crmd[4018]: notice: Transition 8 (Complete=4, Pending=0, Fired=0, Skipped=1, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-8.bz2): Stopped
Dec 17 16:10:47 xfvlab5 pengine[4017]: notice: On loss of CCM Quorum: Ignore
Dec 17 16:10:47 xfvlab5 pengine[4017]: notice: Calculated Transition 9: /var/lib/pacemaker/pengine/pe-input-9.bz2
Dec 17 16:10:47 xfvlab5 crmd[4018]: notice: Initiating action 6: monitor ClusterIP_monitor_30000 on xfvlab5 (local)
Dec 17 16:10:47 xfvlab5 stonith-ng[4014]: notice: crm_update_peer_proc: Node xfvlab11[2] – state is now lost (was member)
Dec 17 16:10:47 xfvlab5 stonith-ng[4014]: notice: Removing xfvlab11/2 from the membership list
Dec 17 16:10:47 xfvlab5 stonith-ng[4014]: notice: Purged 1 peers with id=2 and/or uname=xfvlab11 from the membership cache
Dec 17 16:10:47 xfvlab5 cib[4013]: notice: crm_update_peer_proc: Node xfvlab11[2] – state is now lost (was member)
Dec 17 16:10:47 xfvlab5 cib[4013]: notice: Removing xfvlab11/2 from the membership list
Dec 17 16:10:47 xfvlab5 cib[4013]: notice: Purged 1 peers with id=2 and/or uname=xfvlab11 from the membership cache
Dec 17 16:10:47 xfvlab5 crmd[4018]: notice: Transition 9 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-9.bz2): Complete
Dec 17 16:10:47 xfvlab5 crmd[4018]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Dec 17 16:10:48 xfvlab5 ntpd[686]: Listen normally on 6 ens3 10.0.0.3 UDP 123
Dec 17 16:10:50 xfvlab5 corosync[3995]: [TOTEM ] A new membership (10.0.0.1:16) was formed. Members left: 2
Dec 17 16:10:50 xfvlab5 corosync[3995]: [QUORUM] Members[1]: 1
Dec 17 16:10:50 xfvlab5 corosync[3995]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 17 16:10:50 xfvlab5 crmd[4018]: notice: crm_reap_unseen_nodes: Node xfvlab11[2] – state is now lost (was member)
Dec 17 16:10:50 xfvlab5 crmd[4018]: warning: No match for shutdown action on 2
Dec 17 16:10:50 xfvlab5 crmd[4018]: notice: Stonith/shutdown of xfvlab11 not matched
Dec 17 16:10:50 xfvlab5 crmd[4018]: notice: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Dec 17 16:10:50 xfvlab5 pacemakerd[4012]: notice: crm_reap_unseen_nodes: Node xfvlab11[2] – state is now lost (was member)
Dec 17 16:10:50 xfvlab5 pengine[4017]: notice: On loss of CCM Quorum: Ignore
Dec 17 16:10:50 xfvlab5 pengine[4017]: notice: Calculated Transition 10: /var/lib/pacemaker/pengine/pe-input-10.bz2
Dec 17 16:10:50 xfvlab5 crmd[4018]: notice: Transition 10 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-10.bz2): Complete
Dec 17 16:10:50 xfvlab5 crmd[4018]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
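Once we are done testing, bringing the "failed" member back into the cluster is just a matter of starting the stack on it again:
# pcs cluster start xfvlab11
Whether ClusterIP then moves back to xfvlab11 or stays where it is depends on the stickiness and score settings of your cluster.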
Now that we have a valid, highly available IP, let's configure the shared storage component and the application itself.
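Before handing the volume group over to the cluster, it has to exist on storage that both nodes can see. A rough sketch only, assuming a hypothetical shared block device /dev/sdb (device names and sizes will obviously differ in your environment):
# pvcreate /dev/sdb
# vgcreate app_vg /dev/sdb
# lvcreate -n app_lv -L 10G app_vg
# mkfs.xfs /dev/app_vg/app_lv
For exclusive activation to behave, the VG should also not be auto-activated by the operating system on either node (usually handled through volume_list in lvm.conf, which is outside the scope of this post).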
First we need to set up a volume group / filesystem that will hold our application's data (the VG has to exist beforehand, as prepared above):
[root@xfvlab5~]# pcs resource create app_lvm LVM volgrpname=app_vg exclusive=true --group appgroup
With this configuration, our resource starts automatically:
[root@xfvlab5~]# pcs resource show
Resource Group: appgroup
app_lvm (ocf::heartbeat:LVM): Started
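On the node where the group is running we can also confirm, with the regular LVM tools, that the agent really did activate the volume group:
# vgs app_vg
# lvs app_vg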
Next, we'll configure the filesystem itself and the application resource (in our case the Apache web server):
[root@xfvlab5 ~]# pcs resource create app_fs Filesystem \
device="/dev/app_vg/app_lv" directory="/var/www" fstype="xfs" --group appgroup
[root@xfvlab5 ~]# pcs resource create VIP IPaddr2 ip=10.0.1.100 cidr_netmask=24 --group appgroup
[root@xfvlab5 ~]# pcs resource create Website-Testes apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group appgroup
Note: the Website-Testes resource has resource probing/monitoring; that is, if the status URL fails, the cluster engine will fail the resources over, since it assumes Apache has failed.
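For that status URL to answer, Apache needs mod_status configured for local requests. A minimal sketch, written to a hypothetical /etc/httpd/conf.d/status.conf (the file name is just a convention), to be present on both nodes:
# cat > /etc/httpd/conf.d/status.conf <<'EOF'
<Location /server-status>
    SetHandler server-status
    Require local
</Location>
EOF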
Checking our cluster again, we confirm that everything is running as expected:
# pcs status
Cluster name: cluster1
Last updated: Thu Jan 5 16:29:06 2016
Stack: corosync
Current DC: xfvlab5 (version 1.1.13-3.fc23-44eb2dd) - partition with quorum
2 nodes and 5 resources configured
Online: [ xfvlab11 xfvlab5 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started xfvlab11
Resource Group: appgroup
app_lvm (ocf::heartbeat:LVM): Started xfvlab5
app_fs (ocf::heartbeat:Filesystem): Started xfvlab5
VIP (ocf::heartbeat:IPaddr2): Started xfvlab5
Website-Testes (ocf::heartbeat:apache): Started xfvlab5
PCSD Status:
xfvlab5 member (xfvlab5): Online
xfvlab11 member (xfvlab11): Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
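As a quick sanity check from any machine on the network, the site should now answer on the group's VIP (assuming, of course, that there is some content under the mounted /var/www and that it matches your DocumentRoot):
# curl -I http://10.0.1.100/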
We end up with an appgroup resource group that contains our application (LVM, filesystem, its own VIP and Apache), plus the standalone ClusterIP VIP, which acts as a floating IP towards the cluster itself; the websites served by Apache are reachable through the VIP that belongs to the group.
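One last detail, visible in the Daemon Status section above: corosync and pacemaker show up as active/disabled, meaning they will not start automatically after a reboot. Whether you want that is a design decision (some people prefer to bring a rebooted node back by hand), but if you do want them enabled at boot on every node, pcs does it in one go:
# pcs cluster enable --all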
In conclusion, this is an extremely simple example of how a cluster on Linux works and is configured.
In our example we used the Corosync/Pacemaker cluster technology, which is establishing itself as the default high-availability stack on Linux.
Soon I will write a new post on high availability using, for example, luci and ricci, which are part of RHCS and were the standard for a long time.
Until then, all the best.
If you have any questions, you can always contact me at nuno at nuneshiggs.com