Master and slave MySQL databases in different locations
In this use case a master database server and a slave database server are deployed in different locations. The back-end IPv6 infrastructure provided by Dimension Data is used to replicate data continuously and securely.
As shown below, plumbery provides a streamlined definition of the overall solution that encompasses server locations, the networking infrastructure, the security of information flows, the contextualisation of nodes, and the small but important final touches that make a solution really appealing.
When starting from scratch, it takes about 15 minutes to deploy the fittings below. About half of this time is spent by the cloud services from Dimension Data on deploying the infrastructure. The other half is incurred by cloud-init during the contextualisation of nodes, the software part of the solution. After that time, you can connect to the cluster and use it for real.
Requirements for this use case
- Deploy a master database in one data centre
- Deploy a slave database in another data centre
- Create a Network Domain at each location
- Create an Ethernet network at each location
- Allow IPv6 traffic from the master network to the slave network
- Deploy a SQL server at each location
- Add servers to the automated monitoring dashboard
- Assign public IPv4 addresses to each server
- Add address translation to ensure SSH access to the nodes from the internet
- Add firewall rule to accept TCP traffic on port 22 (ssh)
- Update /etc/hosts to bind IPv6 addresses to host names
- Manage keys to suppress passwords in SSH connections
- Install MySQL at each node
- Configure the master database
- Configure the slave database
- Populate the master database
- Dump the master database and load it at the slave node
- Start the replication from the master to the slave
Fittings plan
The plan below demonstrates multiple interesting building blocks:
- Addition of public IPv4 and firewall rules to control access to selected servers
- Configuration of the firewall to open communications across data centres
- Automatic registration to the monitoring services provided by Dimension Data
- Management of SSH keys to enable secured communications without passwords
- Update of /etc/hosts with IPv6 addresses
- Easy templating of configuration files transmitted to nodes
- Handy generation and management of secrets required at various places
- rsync over IPv6 to manage bulk data transfers between servers
- User documentation of the infrastructure is put directly in the fittings plan
Download this fittings plan if you want to hack it for yourself. It is part of the demonstration directory of the plumbery project at GitHub. Alternatively, you can copy the text below and put it in a text file named fittings.yaml.
---
defaults:
# these directives apply to all nodes created by plumbery
#
cloud-config:
# this key will be randomly generated by plumbery on first action on this
# fittings plan, and remembered afterwards
#
ssh_keys:
rsa_private: |
{{ key.rsa_private }}
rsa_public: "{{ key.rsa_public }}"
# let cloud-init do its job depending on target operating system
#
users:
- default
# for ubuntu nodes, ensure that account ubuntu can do everything
#
- name: ubuntu
sudo: 'ALL=(ALL) NOPASSWD:ALL'
ssh-authorized-keys:
- "{{ key.rsa_public }}"
- "{{ local.rsa_public }}"
# configure SSH keys for the account root
#
- name: root
ssh-authorized-keys:
- "{{ key.rsa_public }}"
- "{{ local.rsa_public }}"
# plumbery uses the account root to run nodes
#
disable_root: false
# plumbery can use ssh certificate to avoid passwords
#
ssh_pwauth: false
---
locationId: AU10
regionId: dd-au
blueprints:
# the blueprint 'sql' is spread over two different locations: @AU10 and @AU11
#
- sql:
# reservation of 2 public ipv4 addresses to be given to nodes afterwards
#
domain:
name: VDC1
description: "Demonstration of SQL replication"
ipv4: 2
# firewall is configured to accept ipv6 traffic from the slave
#
ethernet:
name: databases
subnet: 10.0.0.0
accept:
- AU11::databases
nodes:
- masterSQL:
# plumbery turns hashtags into groups in the inventory
#
description: "#master #database #sql server #ubuntu"
# instructions to be displayed to end user
#
information:
- "this is the SQL Master server"
- "check replication with:"
- "$ ssh ubuntu@{{ masterSQL.public }}"
- "$ mysql"
- "> show master status\\G"
- "check data with:"
- "> use db01;"
- "> select * from persons;"
- "create a record with:"
- "> insert into persons (name) values ('<some name here>');"
- "check change in GTID:"
- "> show master status\\G"
# you could use 'CentOS' or other label from CloudControl library
#
appliance: 'Ubuntu 14'
# computing power
#
cpu: 8
memory: 32
# add NAT to the internet and open firewall for ssh
#
glue:
- internet 22
# add the node to the monitoring dashboard
#
monitoring: essentials
# plumbery passes the following directives to cloud-init
#
cloud-config:
# change the hostname to our definition
#
hostname: "{{ node.name }}"
# provide a random password to mysql server -- this is remembered of course
#
bootcmd:
- echo "mysql-server mysql-server/root_password password {{ mysql_root.secret }}" | sudo debconf-set-selections
- echo "mysql-server mysql-server/root_password_again password {{ mysql_root.secret }}" | sudo debconf-set-selections
# install packages -- here, time synchronization and mysql
#
packages:
- ntp
- mysql-server-5.6
# write various files
#
write_files:
# a script in awk to update the /etc/hosts file
#
- path: /root/hosts.awk
content: |
#!/usr/bin/awk -f
/^{{ masterSQL.private }}/ {next}
/^{{ masterSQL.ipv6 }}/ {next}
/^{{ AU11::slaveSQL.ipv6 }}/ {next}
{print}
END {
print "{{ masterSQL.private }} masterSQL"
print "{{ masterSQL.ipv6 }} masterSQL"
print "{{ AU11::slaveSQL.ipv6 }} slaveSQL"
}
# a script in sed to change the configuration of mysql
#
- path: /root/my.cnf.sed
content: |
#!/usr/bin/sed
/bind-address/s/127.0.0.1/::/
s/#server-id/server-id/
/server-id/s/= 1/= 123/
s/#log_bin.*/log-bin = mysql-bin/
/max_binlog_size/a log-slave-updates\nbinlog_format = MIXED\nenforce-gtid-consistency\ngtid-mode = ON
/enforce-gtid-consistency/s/^#//
/gtid-mode/s/^#//
$!N; /^\(.*\)\n\1$/!P; D
# SQL directives to configure replication on master
#
- path: /root/master_setup.sql
content: |
GRANT REPLICATION SLAVE ON *.*
TO 'replicator'@'slaveSQL'
IDENTIFIED BY '{{ replicator.secret }}';
FLUSH PRIVILEGES;
FLUSH TABLES WITH READ LOCK;
# SQL directives to create some content in the database
#
- path: /root/master_db.sql
content: |
CREATE DATABASE db01;
USE db01;
CREATE USER 'dbuser'@'localhost' IDENTIFIED BY '{{ dbuser.secret }}';
GRANT ALL PRIVILEGES ON db01.* TO 'dbuser'@'localhost';
CREATE TABLE persons (id INT AUTO_INCREMENT,
name VARCHAR(30),
datum TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY(id));
INSERT INTO persons(name) VALUES ('Marko');
INSERT INTO persons(name) VALUES ('John');
INSERT INTO persons(name) VALUES ('Eric');
INSERT INTO persons(name) VALUES ('Clive');
INSERT INTO persons(name) VALUES ('Maria');
# allow the ubuntu account to handle the database easily
#
- path: /home/ubuntu/.my.cnf
permissions: '0400'
content: |
[client]
user=root
password={{ mysql_root.secret }}
# commands executed as root
#
runcmd:
# set keys used by account ubuntu
#
- echo "===== Handling ubuntu identity"
- cp -n /etc/ssh/ssh_host_rsa_key /home/ubuntu/.ssh/id_rsa
- cp -n /etc/ssh/ssh_host_rsa_key.pub /home/ubuntu/.ssh/id_rsa.pub
- chown ubuntu:ubuntu /home/ubuntu/.ssh/*
# host keys are changing on each cloud-init execution
#
- sed -i "/StrictHostKeyChecking/s/^.*$/ StrictHostKeyChecking no/" /etc/ssh/ssh_config
# update the /etc/hosts with awk
#
- echo "===== Updating /etc/hosts"
- cp -n /etc/hosts /etc/hosts.original
- awk -f /root/hosts.awk /etc/hosts >/etc/hosts.new && mv /etc/hosts.new /etc/hosts
# update the SQL configuration
#
- echo "===== Configuring SQL engine"
- cp -n /etc/mysql/my.cnf /etc/mysql/my.cnf.original
- sed -i -f /root/my.cnf.sed /etc/mysql/my.cnf
- /etc/init.d/mysql restart
- mysql -e "source /root/master_setup.sql" -u root -p{{ mysql_root.secret }}
- chown ubuntu:ubuntu /home/ubuntu/.my.cnf
# execute a SQL script to populate the database
#
- echo "===== Populating SQL database"
- mysql -e "source /root/master_db.sql" -u root -p{{ mysql_root.secret }}
# dump the database and prepare rsync transfer for account ubuntu
#
- echo "===== Sharing SQL dump"
- mkdir /var/rsync
- mysqldump --lock-all-tables --all-databases -u root -p{{ mysql_root.secret }} > /var/rsync/all-databases.sql
- sed -i "1i RESET MASTER;" /var/rsync/all-databases.sql
- chown -R ubuntu:ubuntu /var/rsync
---
locationId: AU11
regionId: dd-au
blueprints:
- sql:
# reservation of 2 public ipv4 addresses to be given to nodes afterwards
#
domain:
name: VDC2
description: "Demonstration of SQL replication"
ipv4: 2
# firewall is configured to accept ipv6 traffic from the master
#
ethernet:
name: databases
subnet: 10.0.0.0
accept:
- AU10::databases
nodes:
- slaveSQL:
# plumbery turns hashtags into groups in the inventory
#
description: "#slave #database #sql server #ubuntu"
# instructions to be displayed to end user
#
information:
- "this is a SQL Slave server"
- "check replication with:"
- "$ ssh ubuntu@{{ slaveSQL.public }}"
- "$ mysql"
- "> show slave status\\G"
- "check data with:"
- "> use db01;"
- "> select * from persons;"
# you could use 'CentOS' or other label from CloudControl library
#
appliance: 'Ubuntu 14'
# computing power
#
cpu: 8
memory: 32
# add NAT to the internet and open firewall for ssh
#
glue:
- internet 22
# add the node to the monitoring dashboard
#
monitoring: essentials
# plumbery passes the following directives to cloud-init
#
cloud-config:
# change the hostname to our definition
#
hostname: "{{ node.name }}"
# provide a random password to mysql server
#
bootcmd:
- echo "mysql-server mysql-server/root_password password {{ mysql_root.secret }}" | sudo debconf-set-selections
- echo "mysql-server mysql-server/root_password_again password {{ mysql_root.secret }}" | sudo debconf-set-selections
# install packages -- here, time synchronization and mysql
#
packages:
- ntp
- mysql-server-5.6
# write various files
#
write_files:
# a script in awk to update the /etc/hosts file
#
- path: /root/hosts.awk
content: |
#!/usr/bin/awk -f
/^{{ slaveSQL.private }}/ {next}
/^{{ slaveSQL.ipv6 }}/ {next}
/^{{ AU10::masterSQL.ipv6 }}/ {next}
{print}
END {
print "{{ slaveSQL.private }} slaveSQL"
print "{{ slaveSQL.ipv6 }} slaveSQL"
print "{{ AU10::masterSQL.ipv6 }} masterSQL"
}
# a script in sed to change the configuration of mysql
#
- path: /root/my.cnf.sed
content: |
#!/usr/bin/sed
/bind-address/s/127.0.0.1/::/
s/#server-id/server-id/
/server-id/s/= 1/= 456/
/server-id/a read-only = ON
s/#log_bin.*/log-bin = mysql-bin/
/max_binlog_size/a log-slave-updates\nbinlog_format = MIXED\nenforce-gtid-consistency\ngtid-mode = ON\nrelay-log = relay-log-slave
/enforce-gtid-consistency/s/^#//
/gtid-mode/s/^#//
$!N; /^\(.*\)\n\1$/!P; D
# SQL directives to configure replication on slave
#
- path: /root/slave_setup.sql
content: |
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST='masterSQL',
MASTER_USER='replicator',
MASTER_PASSWORD='{{ replicator.secret }}',
MASTER_AUTO_POSITION = 1;
START SLAVE;
# allow the ubuntu account to handle the database easily
#
- path: /home/ubuntu/.my.cnf
permissions: '0400'
content: |
[client]
user=root
password={{ mysql_root.secret }}
# commands executed as root
#
runcmd:
# set keys used by account ubuntu
#
- echo "===== Handling ubuntu identity"
- cp -n /etc/ssh/ssh_host_rsa_key /home/ubuntu/.ssh/id_rsa
- cp -n /etc/ssh/ssh_host_rsa_key.pub /home/ubuntu/.ssh/id_rsa.pub
- chown ubuntu:ubuntu /home/ubuntu/.ssh/*
# host keys are changing on each cloud-init execution
#
- sed -i "/StrictHostKeyChecking/s/^.*$/ StrictHostKeyChecking no/" /etc/ssh/ssh_config
# update the /etc/hosts with awk
#
- echo "===== Updating /etc/hosts"
- cp -n /etc/hosts /etc/hosts.original
- awk -f /root/hosts.awk /etc/hosts >/etc/hosts.new && mv /etc/hosts.new /etc/hosts
# update the SQL configuration
#
- echo "===== Configuring SQL engine"
- cp -n /etc/mysql/my.cnf /etc/mysql/my.cnf.original
- sed -i -f /root/my.cnf.sed /etc/mysql/my.cnf
- /etc/init.d/mysql restart
- chown ubuntu:ubuntu /home/ubuntu/.my.cnf
# be sure that master will be ready to provide database dump
#
- echo "===== Being patient with master server"
- sleep 1m
# download database dump in rsync secured by ssh tunnel
#
- echo "===== Getting and loading SQL dump"
- rsync -zhave "ssh -i /home/ubuntu/.ssh/id_rsa" ubuntu@masterSQL:/var/rsync/all-databases.sql /tmp/all-databases.sql
- mysql -e "source /tmp/all-databases.sql" -u root -p{{ mysql_root.secret }}
# actual beginning of the replication between master and slave
#
- echo "===== Starting SQL replication"
- mysql -e "source /root/slave_setup.sql" -u root -p{{ mysql_root.secret }}
Please note that in this example both servers are exposed to the public Internet. In real life this would probably not be the case, since the databases would be accessed by application servers from private back-end networks.
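The hosts.awk technique used in the plan, which drops any stale entry for a peer and appends a fresh one, can also be tried standalone. The sketch below uses illustrative placeholder addresses and file names, not real fittings output:

```shell
# sample hosts file with a stale entry for masterSQL
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
10.0.0.6 masterSQL
EOF

# same pattern as hosts.awk in the plan: skip old entries, re-append fresh ones
cat > /tmp/hosts.awk <<'EOF'
/^10\.0\.0\.6/ {next}
{print}
END {
    print "10.0.0.6 masterSQL"
    print "2001:db8::2 slaveSQL"
}
EOF

# rewrite the file in place, as runcmd does for /etc/hosts
awk -f /tmp/hosts.awk /tmp/hosts.sample > /tmp/hosts.new && mv /tmp/hosts.new /tmp/hosts.sample
cat /tmp/hosts.sample
```

Because matching lines are filtered out before the END block re-adds them, the script is idempotent: running it repeatedly never duplicates entries, which matters since cloud-init may run the update more than once.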
Deployment commands
In this case, the blueprint sql is spread over two different data centres. For this reason, plumbery will connect to each data centre separately and do the dirty job to make you happy.
$ python -m plumbery fittings.yaml deploy
This command will build fittings as per the provided plan, and start servers as well. Look at messages displayed by plumbery while it is working, so you can monitor what’s happening.
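This works because fittings.yaml is a multi-document YAML file: one document for the defaults, then one per data centre. As a quick sanity check you can count the document separators; the sample file below is a hypothetical stand-in for the real plan:

```shell
# minimal stand-in mirroring the structure of fittings.yaml:
# a defaults document followed by one document per location
cat > /tmp/fittings-sample.yaml <<'EOF'
---
defaults: {}
---
locationId: AU10
---
locationId: AU11
EOF

# each '---' at the start of a line opens a YAML document
grep -c '^---' /tmp/fittings-sample.yaml   # prints 3
```

A count lower than expected usually means an indentation error has merged two documents, which plumbery would then deploy to a single location.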
Follow-up commands
At the end of the deployment, plumbery will display on screen some instructions to help you move forward. You can ask plumbery to display this information at any time with the following command:
$ python -m plumbery fittings.yaml information
Since servers are up and running, you are invited to play a bit with them, and show evidence of data replication. For example, you could open two additional terminal windows, one for the master server and the other for the slave server. Then connect by ssh, using the ubuntu account, and enter mysql directly.
On the master side, you can type these commands in sequence:
use db01;
select * from persons;
show master status \G
Then move to the slave side, and check status of the server:
use db01;
select * from persons;
show slave status \G
At this stage, the slave server should report the same GTID index as the master.
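For a more direct comparison, the executed GTID set can be queried on both servers. This is standard MySQL 5.6 syntax, not something specific to the fittings plan:

```sql
-- run on masterSQL and on slaveSQL;
-- once replication has caught up, the two sets should be identical
SELECT @@GLOBAL.gtid_executed;
```

If the sets diverge, `show slave status \G` on the slave reports the retrieved and executed GTID sets separately, which helps to locate where replication is lagging.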
Move back to the master server, and create a new record in the table:
insert into persons (name) values ('Alfred');
show master status \G
The last command should show progress in the GTID information. How is this reflected on the slave side? There you can type the following:
select * from persons;
show slave status \G
The SELECT statement should reflect the record created on the other side. And the SHOW statement should follow the evolution of the GTID on the master side.
Troubleshooting
The fittings plan uses multiple secrets, and most of them have been used by plumbery to configure the solution dynamically. If you need to retrieve one of these secrets, for example the root password for SQL, use the following command:
$ python -m plumbery fittings.yaml secrets
Destruction commands
At the end of the demonstration, you may want to reduce costs with the following:
$ python -m plumbery fittings.yaml stop
$ python -m plumbery fittings.yaml destroy