Architecture for Building the Company Website


Table of Contents

  • Introduction
  • Topology Diagram
  • Requirements
    • First, build the MHA cluster
      • Update the host time
      • Change the hostname
      • Configure passwordless SSH between all hosts
      • Send the public key to all hosts (including the local host)
      • Upload the downloaded packages to the host
      • Configure a local yum repository
      • Extract the packages
      • Install dependency packages on the manager host and each node
      • Install the perl modules required by MHA manager
      • Install the MHA manager package
      • Set up the master-slave replication environment
      • Log in to the mysql-01 host (create a test database)
      • Grant privileges
      • Check the status
      • Export the data and send it to the other two mysql hosts
      • Import the data
      • Add privileges
      • Modify the configuration file (same steps on mysql-02 and mysql-03)
      • Establish the master-slave relationship
      • Check whether replication is set up successfully
      • Set read_only on the two slave servers
      • Configure MHA
        • Create MHA's working directories and the related configuration file
        • Edit
        • Check the SSH configuration
      • Check the status of the whole replication environment
      • Check the MHA manager status
      • View the startup log
      • Stop monitoring
      • Create the VIP on the master
      • Enable the script in the main configuration file
      • Write the /usr/bin/master_ip_failover script (requires some Perl)
      • Make the script executable
      • Check the SSH configuration
      • Check the whole replication setup
      • Start monitoring
      • Check whether MHA manager is healthy
      • View the startup log
      • Open a new log window and watch whether the VIP and master fail over
    • Build the ceph cluster
      • Update the host time
      • Modify the hosts file
      • Set up passwordless SSH login
      • Send the key to all hosts
      • Upload and extract the packages
      • Configure the ceph yum repository
      • Send the extracted packages and the yum repo file
      • Install epel-release (all nodes)
      • Deploy ceph on all hosts
      • Deploy services on the management node
      • Change the replica count
      • Install the ceph monitor
      • Collect the nodes' keyring files
      • View the keys
      • Deploy the osd service
      • Use ceph's automatic partitioning
      • Add osd nodes
      • Check the osd status
      • Deploy the mgr management service
      • Unify the cluster configuration
      • Fix the ceph.client.admin.keyring permissions on each node
      • Deploy the mds service
      • Check the mds service
      • Check the cluster status
      • Now create the ceph filesystem
      • Create the storage pools
      • Create the filesystem
      • View the ceph filesystem
      • Check the mds node status
    • Back up the mysql data to ceph
      • Create a ceph RBD
      • Create the rbd storage pool
      • Create a block device of the given size as the disk file
      • View the information for test1
      • Map the block device, i.e. use rbd to map the image name through the kernel module
      • Take a look
      • Create the mount directory
      • Format the partition
      • Mount it
      • Write data as a test
      • Take a look
    • Install ansible
      • Configure the yum repository
      • Upload the packages
      • Install ansible
      • Update the host time
      • Configure the host inventory
      • Run a test
      • Configure passwordless access
      • Modify the host inventory
      • Install the services
      • Install
      • Mount the ceph filesystem on the web servers
      • Edit the file
      • Install the packages
      • Mount it
      • Take a look
    • Build LVS + keepalived
      • Install the dependency packages
      • Upload the packages
      • Extract the packages
      • Configure (pre-compile)
      • Compile and install
      • Configure keepalived in LVS-DR mode
      • Add a symlink
      • Create the directory
      • Copy the configuration file into the directory just created
      • Modify the configuration file
      • Restart keepalived and enable it at boot
      • Take a look
      • Configure the standby node s_director
      • Modify the configuration file
      • Restart and enable at boot
      • Take a look
      • Test master/backup failover
      • Modify the nginx service (perform the same steps on both nginx-01 and nginx-02)
      • Reload the configuration file
      • Configure the VIP for nginx
      • Restart the NIC and check the VIP
      • Modify the home page (for testing)
      • Install the ipvsadm command and add rules
      • Add the real-server nodes
      • Restart the service
      • Take a look
      • Access test
    • Build the discuz forum
      • Upload the packages
      • Resolve the dependencies
      • Install libmcrypt
      • Extract the php package
      • Install php
      • Compile and install
      • Generate the php.ini script
      • Rename the fpm configuration file php-fpm.conf.default
      • Modify the configuration file
      • Copy the startup script into init.d
      • Grant execute permission
      • Enable at boot
      • Start the service
      • Check the listening port
      • Modify the nginx.conf configuration file
      • Reload the configuration file
      • Create the index.php and test.php files
      • Test
      • Change the default run user
      • Download the package
      • Create the site directory
      • Extract the package
      • Set up the virtual host
      • Add permissions
      • Restart nginx
      • Create the database
      • Access the site and install
    • Install zabbix (built on nginx02)
      • Extract the package and configure the zabbix repository
      • Resolve the dependencies
      • Install libmcrypt
      • Install php
      • Modify the configuration file
      • Create the php-fpm startup script
      • Modify the configuration file
      • Start the php-fpm service
      • Modify the nginx configuration to support php
      • Reload the configuration file
      • Create a test page
      • Test
      • Create the database used by zabbix
      • Import the database
      • Resolve the dependencies
      • Create the zabbix user
      • Configure (pre-compile)
      • Install
      • Add symlinks
      • Configure zabbix_server.conf
      • Configure zabbix to monitor itself
      • Start it
      • Add the zabbix startup script
      • Configure the zabbix web interface
      • Start zabbix_agentd
      • Configure the web page
      • Switch to the Chinese interface
      • Fix garbled Chinese characters
    • Build the DNS service
      • Start named and enable it at boot
      • Check the port
      • Modify the configuration file
      • Run a check
      • Edit the forward zone file
      • Check the forward and reverse zone files
      • Change the group owner
      • Test
Introduction

The company currently needs to build a technical forum that serves the public internet. The site design must achieve high availability and high load capacity, and monitoring must be added.

Topology Diagram

(Topology diagram image omitted.)

Requirements

1. Use LVS + keepalived for load balancing
2. Use MHA to build the mysql cluster
3. Use a ceph cluster to keep the web site content consistent across servers
4. Build a discuz forum
5. Build DNS to resolve the site's domain name
6. Use zabbix to monitor each server's hardware metrics and service ports
7. Back up the mysql database to the ceph cluster
8. Use ansible to batch-deploy nginx and apache; both nginx and apache must be installed from source.

First, build the MHA cluster

Hostname    IP
mysql-01    192.168.1.2
mysql-02    192.168.1.3
mysql-03    192.168.1.4
mha         192.168.1.5

Update the host time

Every host needs this update.

[root@mysql-01 ~]# ntpdate ntp1.aliyun.com
6 Apr 15:34:50 ntpdate[1467]: step time server 120.25.115.20 offset -28798.923817 sec
[root@mysql-01 ~]# 

Create a cron job

[root@mysql-01 ~]# crontab -l
30 * * * * ntpdate ntp1.aliyun.com
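
The cron entry above is shown on mysql-01 only, but every host needs it. Once the passwordless SSH configured below is in place, a hedged one-liner can push the same entry to all four hosts (note that `crontab -` replaces any existing crontab on the target):

[root@mysql-01 ~]# for i in 2 3 4 5;do ssh root@192.168.1.$i 'echo "30 * * * * ntpdate ntp1.aliyun.com" | crontab -';done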

Change the hostname

[root@mysql ~]# hostnamectl set-hostname mysql-01
[root@mysql ~]# bash   # run bash to refresh the environment so the new hostname shows
[root@mysql-01 ~]# 
[root@mysql ~]# hostnamectl set-hostname mysql-02
[root@mysql ~]# bash   # run bash to refresh the environment so the new hostname shows
[root@mysql-02 ~]# 
[root@mysql ~]# hostnamectl set-hostname mysql-03
[root@mysql ~]# bash   # run bash to refresh the environment so the new hostname shows
[root@mysql-03 ~]# 
[root@mysql ~]# hostnamectl set-hostname mha
[root@mysql ~]# bash   # run bash to refresh the environment so the new hostname shows
[root@mha ~]# 

Configure passwordless SSH between all hosts

All hosts must be able to SSH to each other without a password.

[root@mysql-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): # type nothing, just press Enter
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): # type nothing, just press Enter
Enter same passphrase again: # type nothing, just press Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:u1htfi6dAAP6Fx4plXeAfvGJfmANLO9Gcde8GflzpPk root@mysql-01
The key's randomart image is:
+---[RSA 2048]----+
|         +..   o.|
|      . = * o .o+|
|     . + = X o +=|
|    . . B B + o+o|
|     . oS@ .   .o|
|      . oo= .   E|
|       .o.o+ .   |
|       o +. +    |
|      . . .+.    |
+----[SHA256]-----+
[root@mysql-01 ~]# 

The key pair has now been created.

Send the public key to all hosts (including the local host)

[root@mysql-01 ~]# for i in 2 3 4 5;do ssh-copy-id 192.168.1.$i;done

Run a test

[root@mysql-01 ~]# for i in 2 3 4 5;do ssh root@192.168.1.$i hostname;done
mysql-01
mysql-02
mysql-03
mha

The output is as expected.

Upload the downloaded packages to the host

Link: https://pan.baidu.com/s/1hRiV4jF7w9WaG5brhdRRkA
Extraction code: agp6

[root@mysql-01 ~]# ls
auto_install_mysql_cpu4.sh  mha4mysql-manager-0.57-0.el7.noarch.rpm  mhapath.tar.gz  mysql-community-5.7.26-1.el7.src.rpm
boost_1_59_0                mha4mysql-node-0.57-0.el7.noarch.rpm     mysql-5.7.26    rpmbuild
[root@mysql-01 ~]# 

You can see that the three packages mha4mysql-manager-0.57-0.el7.noarch.rpm, mha4mysql-node-0.57-0.el7.noarch.rpm, and mhapath.tar.gz have now been uploaded.

Configure a local yum repository

[root@mysql-01 ~]# vim /etc/yum.repos.d/mhapath.repo
[mhapath]
name=mhapath
baseurl=file:///root/mhapath
enabled=1
gpgcheck=0

Extract the packages

[root@mysql-01 ~]# tar -zxvf mhapath.tar.gz

Send the extracted package and the yum repo file to the other hosts

[root@mysql-01 ~]# for i in 3 4 5;do scp -r /root/mhapath root@192.168.1.$i:~;done
[root@mysql-01 ~]# for i in 3 4 5;do scp -r /etc/yum.repos.d/ root@192.168.1.$i:/etc/;done
[root@mysql-01 ~]# for i in 3 4 5;do scp mha4mysql-node-0.57-0.el7.noarch.rpm root@192.168.1.$i:~;done
mha4mysql-node-0.57-0.el7.noarch.rpm                                                        100%   35KB  13.8MB/s   00:00    
mha4mysql-node-0.57-0.el7.noarch.rpm                                                        100%   35KB  14.6MB/s   00:00    
mha4mysql-node-0.57-0.el7.noarch.rpm                                                        100%   35KB  18.6MB/s   00:00

Install dependency packages on the manager host and each node

Run both of these commands on every host.

[root@mysql-01 ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager --skip-broken --nogpgcheck
[root@mysql-01 ~]# rpm -ivh mha4mysql-node-0.57-0.el7.noarch.rpm 
Preparing...                          ################################# [100%]
Updating / installing...
   1:mha4mysql-node-0.57-0.el7        ################################# [100%]
[root@mysql-01 ~]# 

Install the perl modules required by MHA manager

[root@mha ~]# yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN

Install the MHA manager package

[root@mysql-01 ~]# scp mha4mysql-manager-0.57-0.el7.noarch.rpm root@192.168.1.5:~
mha4mysql-manager-0.57-0.el7.noarch.rpm 
[root@mha ~]# rpm -ivh mha4mysql-manager-0.57-0.el7.noarch.rpm 
Preparing...                          ################################# [100%]
Updating / installing...1:mha4mysql-manager-0.57-0.el7     ################################# [100%]
[root@mha ~]# 

Set up the master-slave replication environment

First install the semi-synchronous replication plugin on all mysql hosts, then stop the database on mysql-01, modify its configuration file, and restart mysql.

[root@mysql-01 ~]# mysql -uroot -p
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.26 Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.01 sec)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.00 sec)

mysql> 
[root@mysql-01 ~]# systemctl stop mysql
[root@mysql-01 ~]# vim /etc/my.cnf
datadir=/data/mysql/data
port=3306
socket=/usr/local/mysql/mysql.sock
symbolic-links=0
character-set-server=utf8
log-error=/data/mysql/log/mysqld.log
pid-file=/usr/local/mysql/mysqld.pid
server-id=1  # add from this line on
log-bin=/data/mysql/log/mysql-bin
log-bin-index=/data/mysql/log/mysql-bin.index
binlog_format=mixed
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=/data/mysql/log/relay-bin
relay-log-index=/data/mysql/log/slave-relay-bin.index
log_slave_updates=1

[root@mysql-01 ~]# systemctl restart mysql

Log in to the mysql-01 host (create a test database)

Create the HA database, create the stu table, and insert a row.

mysql> create database HA;
Query OK, 1 row affected (10.01 sec)

mysql> use HA;
Database changed
mysql> create table stu(id int,name varchar(20));
Query OK, 0 rows affected (0.00 sec)

mysql> insert into stu values(1,'lisi');
Query OK, 1 row affected (0.02 sec)

mysql> 

Grant privileges

Create the user used for master-slave replication, grant it privileges, then flush privileges so they take effect.

mysql> grant replication slave on *.* to hello@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> 

Grant privileges to the manager host

mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> 

Check the status

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     1655 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

mysql> 

Export the data and send it to the other two mysql hosts

[root@mysql-01 ~]# mysqldump -uroot -p1 -B HA>HA.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
[root@mysql-01 ~]# for i in 3 4;do scp HA.sql root@192.168.1.$i:~;done
HA.sql                                                                                        100% 1940     1.2MB/s   00:00    
HA.sql                                                                                        100% 1940     2.1MB/s   00:00    
[root@mysql-01 ~]# 

Import the data

Import the data into each of the other two mysql databases.

[root@mysql-02 ~]# mysql -uroot -p1 < HA.sql 
mysql: [Warning] Using a password on the command line interface can be insecure.
[root@mysql-02 ~]# mysql -uroot -p1
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.26 Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables;
ERROR 1046 (3D000): No database selected
mysql> use HA;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------+
| Tables_in_HA |
+--------------+
| stu          |
+--------------+
1 row in set (0.00 sec)

mysql> 

Add privileges

Add the same privileges on each of the remaining two mysql hosts.

mysql> grant replication slave on *.* to hello@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> grant all privileges on *.* to manager@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> 

Modify the configuration file (same steps on mysql-02 and mysql-03)

Modify the configuration file on each of the two mysql hosts: stop mysql first, edit the file, then restart once done.

[root@mysql-02 ~]# systemctl stop mysql
[root@mysql-02 ~]# vim /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/data/mysql/data
port=3306
socket=/usr/local/mysql/mysql.sock
symbolic-links=0
character-set-server=utf8
log-error=/data/mysql/log/mysqld.log
pid-file=/usr/local/mysql/mysqld.pid
server-id=2   # note: the server-id must be different on each of the three mysql hosts
log-bin=/data/mysql/log/mysql-bin
log-bin-index=/data/mysql/log/mysql-bin.index
binlog_format=mixed
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=10000
rpl_semi_sync_slave_enabled=1
relay_log_purge=0
relay-log=/data/mysql/log/relay-bin
relay-log-index=/data/mysql/log/slave-relay-bin.index
log_slave_updates=1

[root@mysql-02 ~]# systemctl restart mysql

If the restart fails here, it means the semi-sync plugin was not installed on this mysql host. Comment out the newly added lines in the configuration file, start mysql, install the semi-sync plugins from inside mysql, then remove the comments and restart mysql.
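
A minimal sketch of that recovery procedure, assuming the same paths and root password used above:

# with the newly added lines in /etc/my.cnf commented out:
[root@mysql-02 ~]# systemctl start mysql
[root@mysql-02 ~]# mysql -uroot -p1 -e "install plugin rpl_semi_sync_master soname 'semisync_master.so';"
[root@mysql-02 ~]# mysql -uroot -p1 -e "install plugin rpl_semi_sync_slave soname 'semisync_slave.so';"
[root@mysql-02 ~]# vim /etc/my.cnf         # remove the comments again
[root@mysql-02 ~]# systemctl restart mysql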

Establish the master-slave relationship

Enter mysql and stop the slave threads first, then point the slave at the master: specify the master's IP address, the replication user and its password, the master's binlog file, and the binlog start position. Then start the slave again.

[root@mysql-02 ~]# mysql -uroot -p1
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.26-log Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> change master to master_host='192.168.1.2',master_user='hello',master_password='1',master_log_file='mysql-bin.000001',master_log_pos=1655;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

Check whether replication is set up successfully

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.2
                  Master_User: hello
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 1655
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes   # Yes here means the IO thread is fine
            Slave_SQL_Running: Yes   # Yes here means the SQL thread is fine
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1655
              Relay_Log_Space: 521
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: f9cf2bb0-9f99-11ec-9e14-000c294a561e
             Master_Info_File: /data/mysql/data/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
1 row in set (0.00 sec)

mysql> 

If IO or SQL shows No, re-grant the replication privileges and point the slave at the master database again.
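
A hedged sketch of that re-do, reusing the user, password, and binlog coordinates from the steps above:

mysql> stop slave;
-- on the master, re-create the grant if it is missing:
--   grant replication slave on *.* to hello@'192.168.1.%' identified by '1';
mysql> change master to master_host='192.168.1.2',master_user='hello',master_password='1',master_log_file='mysql-bin.000001',master_log_pos=1655;
mysql> start slave;
mysql> show slave status\G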

Set read_only on the two slave servers

The slaves serve read traffic. The reason read_only is not written into the configuration file is that a slave can be promoted to master at any time.

mysql> set global read_only=1;
Query OK, 0 rows affected (0.00 sec)

At this point the whole cluster environment is built; what remains is configuring the MHA software.

Configure MHA

Create MHA's working directories and the related configuration file

[root@mha ~]# mkdir -p /var/log/masterha/app1
[root@mha ~]# mkdir -p /etc/masterha

Edit

[root@mha ~]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1
master_binlog_dir=/data/mysql/log
#master_ip_failover_script=/usr/bin/master_ip_failover
#master_ip_online_change_script=/usr/bin/master_ip_online_change
user=manager
password=1
ping_interval=1
remote_workdir=/tmp
repl_user=hello   # note: this is the user used for replication, i.e. the slave user
repl_password=1   # the slave user's password
report_script=/usr/local/send_report
shutdown_script=""
ssh_user=root

[server1]
hostname=192.168.1.2
port=3306

[server2]
hostname=192.168.1.3
port=3306

[server3]
hostname=192.168.1.4
port=3306

Check the SSH configuration

[root@mha ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Wed Apr  6 19:46:50 2022 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr  6 19:46:50 2022 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr  6 19:46:50 2022 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Apr  6 19:46:50 2022 - [info] Starting SSH connection tests..
Wed Apr  6 19:46:51 2022 - [debug] 
Wed Apr  6 19:46:50 2022 - [debug]  Connecting via SSH from root@192.168.1.2(192.168.1.2:22) to root@192.168.1.3(192.168.1.3:22)..
Wed Apr  6 19:46:50 2022 - [debug]   ok.
Wed Apr  6 19:46:50 2022 - [debug]  Connecting via SSH from root@192.168.1.2(192.168.1.2:22) to root@192.168.1.4(192.168.1.4:22)..
Wed Apr  6 19:46:50 2022 - [debug]   ok.
Wed Apr  6 19:46:51 2022 - [debug] 
Wed Apr  6 19:46:51 2022 - [debug]  Connecting via SSH from root@192.168.1.3(192.168.1.3:22) to root@192.168.1.2(192.168.1.2:22)..
Wed Apr  6 19:46:51 2022 - [debug]   ok.
Wed Apr  6 19:46:51 2022 - [debug]  Connecting via SSH from root@192.168.1.3(192.168.1.3:22) to root@192.168.1.4(192.168.1.4:22)..
Wed Apr  6 19:46:51 2022 - [debug]   ok.
Wed Apr  6 19:46:52 2022 - [debug] 
Wed Apr  6 19:46:51 2022 - [debug]  Connecting via SSH from root@192.168.1.4(192.168.1.4:22) to root@192.168.1.2(192.168.1.2:22)..
Wed Apr  6 19:46:51 2022 - [debug]   ok.
Wed Apr  6 19:46:51 2022 - [debug]  Connecting via SSH from root@192.168.1.4(192.168.1.4:22) to root@192.168.1.3(192.168.1.3:22)..
Wed Apr  6 19:46:51 2022 - [debug]   ok.
Wed Apr  6 19:46:52 2022 - [info] All SSH connection tests passed successfully.

Seeing "All SSH connection tests passed successfully" means this step succeeded.

Check the status of the whole replication environment

[root@mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Wed Apr  6 21:18:00 2022 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr  6 21:18:00 2022 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr  6 21:18:00 2022 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Apr  6 21:18:00 2022 - [info] MHA::MasterMonitor version 0.57.
Wed Apr  6 21:18:01 2022 - [info] GTID failover mode = 0
Wed Apr  6 21:18:01 2022 - [info] Dead Servers:
Wed Apr  6 21:18:01 2022 - [info] Alive Servers:
Wed Apr  6 21:18:01 2022 - [info]   192.168.1.2(192.168.1.2:3306)
Wed Apr  6 21:18:01 2022 - [info]   192.168.1.3(192.168.1.3:3306)
Wed Apr  6 21:18:01 2022 - [info]   192.168.1.4(192.168.1.4:3306)
Wed Apr  6 21:18:01 2022 - [info] Alive Slaves:
Wed Apr  6 21:18:01 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 21:18:01 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 21:18:01 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 21:18:01 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 21:18:01 2022 - [info] Current Alive Master: 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 21:18:01 2022 - [info] Checking slave configurations..
Wed Apr  6 21:18:01 2022 - [info] Checking replication filtering settings..
Wed Apr  6 21:18:01 2022 - [info]  binlog_do_db= , binlog_ignore_db= 
Wed Apr  6 21:18:01 2022 - [info]  Replication filtering check ok.
Wed Apr  6 21:18:01 2022 - [info] GTID (with auto-pos) is not supported
Wed Apr  6 21:18:01 2022 - [info] Starting SSH connection tests..
Wed Apr  6 21:18:03 2022 - [info] All SSH connection tests passed successfully.
Wed Apr  6 21:18:03 2022 - [info] Checking MHA Node version..
Wed Apr  6 21:18:03 2022 - [info]  Version check ok.
Wed Apr  6 21:18:03 2022 - [info] Checking SSH publickey authentication settings on the current master..
Wed Apr  6 21:18:03 2022 - [info] HealthCheck: SSH to 192.168.1.2 is reachable.
Wed Apr  6 21:18:04 2022 - [info] Master MHA Node version is 0.57.
Wed Apr  6 21:18:04 2022 - [info] Checking recovery script configurations on 192.168.1.2(192.168.1.2:3306)..
Wed Apr  6 21:18:04 2022 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/log --output_file=/tmp/save_binary_logs_test --manager_version=0.57 --start_file=mysql-bin.000001 
Wed Apr  6 21:18:04 2022 - [info]   Connecting to root@192.168.1.2(192.168.1.2:22).. 
  Creating /tmp if not exists..    ok.
 Checking output directory is accessible or not..
  ok.
 Binlog found at /data/mysql/log, up to mysql-bin.000001
Wed Apr  6 21:18:04 2022 - [info] Binlog setting check done.
Wed Apr  6 21:18:04 2022 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Wed Apr  6 21:18:04 2022 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.1.3 --slave_ip=192.168.1.3 --slave_port=3306 --workdir=/tmp --target_version=5.7.26-log --manager_version=0.57 --relay_log_info=/data/mysql/data/relay-log.info  --relay_dir=/data/mysql/data/  --slave_pass=xxx
Wed Apr  6 21:18:04 2022 - [info]   Connecting to root@192.168.1.3(192.168.1.3:22).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/data/relay-log.info ... ok.
    Relay log found at /data/mysql/log, up to relay-bin.000002
    Temporary relay log file is /data/mysql/log/relay-bin.000002
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Apr  6 21:18:04 2022 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.1.4 --slave_ip=192.168.1.4 --slave_port=3306 --workdir=/tmp --target_version=5.7.26-log --manager_version=0.57 --relay_log_info=/data/mysql/data/relay-log.info  --relay_dir=/data/mysql/data/  --slave_pass=xxx
Wed Apr  6 21:18:04 2022 - [info]   Connecting to root@192.168.1.4(192.168.1.4:22).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/data/relay-log.info ... ok.
    Relay log found at /data/mysql/log, up to relay-bin.000002
    Temporary relay log file is /data/mysql/log/relay-bin.000002
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Apr  6 21:18:04 2022 - [info] Slaves settings check done.
Wed Apr  6 21:18:04 2022 - [info] 
192.168.1.2(192.168.1.2:3306) (current master)
 +--192.168.1.3(192.168.1.3:3306)
 +--192.168.1.4(192.168.1.4:3306)

Wed Apr  6 21:18:04 2022 - [info] Checking replication health on 192.168.1.3..
Wed Apr  6 21:18:04 2022 - [info]  ok.
Wed Apr  6 21:18:04 2022 - [info] Checking replication health on 192.168.1.4..
Wed Apr  6 21:18:04 2022 - [info]  ok.
Wed Apr  6 21:18:04 2022 - [warning] master_ip_failover_script is not defined.
Wed Apr  6 21:18:04 2022 - [warning] shutdown_script is not defined.
Wed Apr  6 21:18:04 2022 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

Seeing "MySQL Replication Health is OK" means this step succeeded. (If it reports NOT OK, the check failed; try re-granting the privileges and redoing the replication setup.)

Check the MHA manager status

Start MHA monitoring.

[root@mha ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf \
>   --remove_dead_master_conf  --ignore_last_failover < /dev/null > \
>   /var/log/masterha/app1/manager.log 2>&1 &
[1] 5180
[root@mha ~]# masterha_check_status --conf=/etc/masterha/app1.cnf 
app1 (pid:5180) is running(0:PING_OK), master:192.168.1.2
[root@mha ~]# 

The status is normal here and the master's IP is shown.
Note: when healthy it prints "PING_OK"; otherwise it prints "NOT_RUNNING", which means MHA monitoring is not running.

View the startup log

[root@mha ~]# tail -20 /var/log/masterha/app1/manager.log
Checking slave recovery environment settings..
    Opening /data/mysql/data/relay-log.info ... ok.
    Relay log found at /data/mysql/log, up to relay-bin.000002
    Temporary relay log file is /data/mysql/log/relay-bin.000002
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Apr  6 21:22:57 2022 - [info] Slaves settings check done.
Wed Apr  6 21:22:57 2022 - [info] 
192.168.1.2(192.168.1.2:3306) (current master)
 +--192.168.1.3(192.168.1.3:3306)
 +--192.168.1.4(192.168.1.4:3306)

Wed Apr  6 21:22:57 2022 - [warning] master_ip_failover_script is not defined.
Wed Apr  6 21:22:57 2022 - [warning] shutdown_script is not defined.
Wed Apr  6 21:22:57 2022 - [info] Set master ping interval 1 seconds.
Wed Apr  6 21:22:57 2022 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Wed Apr  6 21:22:57 2022 - [info] Starting ping health check on 192.168.1.2(192.168.1.2:3306)..
Wed Apr  6 21:22:57 2022 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
[root@mha ~]# 

The line "Ping(SELECT) succeeded, waiting until MySQL doesn't respond.." means the whole system is now under monitoring.
The core MHA setup is complete. The next step is to create the VIP.

Stop monitoring

[root@mha ~]# masterha_stop --conf=/etc/masterha/app1.cnf 
Stopped app1 successfully.
[1]+  Exit 1                  nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha/app1/manager.log 2>&1
[root@mha ~]# 

Create the VIP on the master

Create the VIP and take a look.

[root@mysql-01 ~]# ifconfig ens33:1 192.168.1.200 netmask 255.255.255.0 up
[root@mysql-01 ~]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.2  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::8513:8f3a:aa86:c310  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:4a:56:1e  txqueuelen 1000  (Ethernet)
        RX packets 17075  bytes 7135592 (6.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 25434  bytes 46457937 (44.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.200  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 00:0c:29:4a:56:1e  txqueuelen 1000  (Ethernet)

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 193  bytes 38912 (38.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 193  bytes 38912 (38.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@mysql-01 ~]# 

Enable the script in the main configuration file

[root@mha ~]# vim /etc/masterha/app1.cnf
[server default]
manager_workdir=/var/log/masterha/app1
manager_log=/var/log/masterha/app1/manager.log
master_binlog_dir=/data/mysql/log
master_ip_failover_script=/usr/bin/master_ip_failover  # uncomment this line
#master_ip_online_change_script=/usr/bin/master_ip_online_change
user=manager
password=1
ping_interval=1
remote_workdir=/tmp
repl_user=hello
repl_password=1
report_script=/usr/local/send_report
shutdown_script=""
ssh_user=root

[server1]
hostname=192.168.1.2
port=3306

[server2]
hostname=192.168.1.3
port=3306

[server3]
hostname=192.168.1.4
port=3306

Write the /usr/bin/master_ip_failover script (requires some Perl)

[root@mha ~]# vim /usr/bin/master_ip_failover

#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);

my $vip = '192.168.1.200/24';    # this must be the VIP configured above
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig ens33:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig ens33:$key down";

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        #`ssh $ssh_user\@cluster1 \" $ssh_start_vip \"`;
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}

# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Make the script executable

[root@mha ~]# chmod +x /usr/bin/master_ip_failover
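
Before handing the script to MHA, it can be smoke-tested by hand. This invocation is a sketch that mirrors the --command=status call MHA itself issues (visible in the manager log further down):

[root@mha ~]# /usr/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.2 --orig_master_ip=192.168.1.2 --orig_master_port=3306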

Check the SSH configuration

[root@mha ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Wed Apr  6 22:00:36 2022 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr  6 22:00:36 2022 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:00:36 2022 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:00:36 2022 - [info] Starting SSH connection tests..
Wed Apr  6 22:00:36 2022 - [debug] 
Wed Apr  6 22:00:36 2022 - [debug]  Connecting via SSH from root@192.168.1.2(192.168.1.2:22) to root@192.168.1.3(192.168.1.3:22)..
Wed Apr  6 22:00:36 2022 - [debug]   ok.
Wed Apr  6 22:00:36 2022 - [debug]  Connecting via SSH from root@192.168.1.2(192.168.1.2:22) to root@192.168.1.4(192.168.1.4:22)..
Wed Apr  6 22:00:36 2022 - [debug]   ok.
Wed Apr  6 22:00:37 2022 - [debug] 
Wed Apr  6 22:00:36 2022 - [debug]  Connecting via SSH from root@192.168.1.3(192.168.1.3:22) to root@192.168.1.2(192.168.1.2:22)..
Wed Apr  6 22:00:37 2022 - [debug]   ok.
Wed Apr  6 22:00:37 2022 - [debug]  Connecting via SSH from root@192.168.1.3(192.168.1.3:22) to root@192.168.1.4(192.168.1.4:22)..
Wed Apr  6 22:00:37 2022 - [debug]   ok.
Wed Apr  6 22:00:37 2022 - [debug] 
Wed Apr  6 22:00:37 2022 - [debug]  Connecting via SSH from root@192.168.1.4(192.168.1.4:22) to root@192.168.1.2(192.168.1.2:22)..
Wed Apr  6 22:00:37 2022 - [debug]   ok.
Wed Apr  6 22:00:37 2022 - [debug]  Connecting via SSH from root@192.168.1.4(192.168.1.4:22) to root@192.168.1.3(192.168.1.3:22)..
Wed Apr  6 22:00:37 2022 - [debug]   ok.
Wed Apr  6 22:00:37 2022 - [info] All SSH connection tests passed successfully.
[root@mha ~]# 

Check the whole replication setup

[root@mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf 
Wed Apr  6 22:03:46 2022 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr  6 22:03:46 2022 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:03:46 2022 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:03:46 2022 - [info] MHA::MasterMonitor version 0.57.
Wed Apr  6 22:03:47 2022 - [info] GTID failover mode = 0
Wed Apr  6 22:03:47 2022 - [info] Dead Servers:
Wed Apr  6 22:03:47 2022 - [info] Alive Servers:
Wed Apr  6 22:03:47 2022 - [info]   192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:03:47 2022 - [info]   192.168.1.3(192.168.1.3:3306)
Wed Apr  6 22:03:47 2022 - [info]   192.168.1.4(192.168.1.4:3306)
Wed Apr  6 22:03:47 2022 - [info] Alive Slaves:
Wed Apr  6 22:03:47 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:03:47 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:03:47 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:03:47 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:03:47 2022 - [info] Current Alive Master: 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:03:47 2022 - [info] Checking slave configurations..
Wed Apr  6 22:03:47 2022 - [info] Checking replication filtering settings..
Wed Apr  6 22:03:47 2022 - [info]  binlog_do_db= , binlog_ignore_db= 
Wed Apr  6 22:03:47 2022 - [info]  Replication filtering check ok.
Wed Apr  6 22:03:47 2022 - [info] GTID (with auto-pos) is not supported
Wed Apr  6 22:03:47 2022 - [info] Starting SSH connection tests..
Wed Apr  6 22:03:49 2022 - [info] All SSH connection tests passed successfully.
Wed Apr  6 22:03:49 2022 - [info] Checking MHA Node version..
Wed Apr  6 22:03:49 2022 - [info]  Version check ok.
Wed Apr  6 22:03:49 2022 - [info] Checking SSH publickey authentication settings on the current master..
Wed Apr  6 22:03:49 2022 - [info] HealthCheck: SSH to 192.168.1.2 is reachable.
Wed Apr  6 22:03:49 2022 - [info] Master MHA Node version is 0.57.
Wed Apr  6 22:03:49 2022 - [info] Checking recovery script configurations on 192.168.1.2(192.168.1.2:3306)..
Wed Apr  6 22:03:49 2022 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/log --output_file=/tmp/save_binary_logs_test --manager_version=0.57 --start_file=mysql-bin.000001 
Wed Apr  6 22:03:49 2022 - [info]   Connecting to root@192.168.1.2(192.168.1.2:22).. 
  Creating /tmp if not exists..    ok.
 Checking output directory is accessible or not..
  ok.
 Binlog found at /data/mysql/log, up to mysql-bin.000001
Wed Apr  6 22:03:49 2022 - [info] Binlog setting check done.
Wed Apr  6 22:03:49 2022 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Wed Apr  6 22:03:49 2022 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.1.3 --slave_ip=192.168.1.3 --slave_port=3306 --workdir=/tmp --target_version=5.7.26-log --manager_version=0.57 --relay_log_info=/data/mysql/data/relay-log.info  --relay_dir=/data/mysql/data/  --slave_pass=xxx
Wed Apr  6 22:03:49 2022 - [info]   Connecting to root@192.168.1.3(192.168.1.3:22).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/data/relay-log.info ... ok.
    Relay log found at /data/mysql/log, up to relay-bin.000002
    Temporary relay log file is /data/mysql/log/relay-bin.000002
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Apr  6 22:03:50 2022 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='manager' --slave_host=192.168.1.4 --slave_ip=192.168.1.4 --slave_port=3306 --workdir=/tmp --target_version=5.7.26-log --manager_version=0.57 --relay_log_info=/data/mysql/data/relay-log.info  --relay_dir=/data/mysql/data/  --slave_pass=xxx
Wed Apr  6 22:03:50 2022 - [info]   Connecting to root@192.168.1.4(192.168.1.4:22).. 
  Checking slave recovery environment settings..
    Opening /data/mysql/data/relay-log.info ... ok.
    Relay log found at /data/mysql/log, up to relay-bin.000002
    Temporary relay log file is /data/mysql/log/relay-bin.000002
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Apr  6 22:03:50 2022 - [info] Slaves settings check done.
Wed Apr  6 22:03:50 2022 - [info] 
192.168.1.2(192.168.1.2:3306) (current master)
 +--192.168.1.3(192.168.1.3:3306)
 +--192.168.1.4(192.168.1.4:3306)

Wed Apr  6 22:03:50 2022 - [info] Checking replication health on 192.168.1.3..
Wed Apr  6 22:03:50 2022 - [info]  ok.
Wed Apr  6 22:03:50 2022 - [info] Checking replication health on 192.168.1.4..
Wed Apr  6 22:03:50 2022 - [info]  ok.
Wed Apr  6 22:03:50 2022 - [info] Checking master_ip_failover_script status:
Wed Apr  6 22:03:50 2022 - [info]   /usr/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.2 --orig_master_ip=192.168.1.2 --orig_master_port=3306 

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.1.200/24===

Checking the Status of the script.. OK 
Wed Apr  6 22:03:50 2022 - [info]  OK.
Wed Apr  6 22:03:50 2022 - [warning] shutdown_script is not defined.
Wed Apr  6 22:03:50 2022 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
[root@mha ~]# 

Start monitoring

[root@mha ~]# nohup masterha_manager --conf=/etc/masterha/app1.cnf \
>    --remove_dead_master_conf  --ignore_last_failover < /dev/null > \
>    /var/log/masterha/app1/manager.log 2>&1 &
[1] 5738

Check whether MHA manager is healthy

[root@mha ~]# masterha_check_status --conf=/etc/masterha/app1.cnf 
app1 (pid:5738) is running(0:PING_OK), master:192.168.1.2
[root@mha ~]# 

View the startup log

[root@mha ~]# tail -20 /var/log/masterha/app1/manager.log
Cleaning up test file(s).. done.
Wed Apr  6 22:04:44 2022 - [info] Slaves settings check done.
Wed Apr  6 22:04:44 2022 - [info] 
192.168.1.2(192.168.1.2:3306) (current master)
 +--192.168.1.3(192.168.1.3:3306)
 +--192.168.1.4(192.168.1.4:3306)

Wed Apr  6 22:04:44 2022 - [info] Checking master_ip_failover_script status:
Wed Apr  6 22:04:44 2022 - [info]   /usr/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.1.2 --orig_master_ip=192.168.1.2 --orig_master_port=3306 

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.1.200/24===

Checking the Status of the script.. OK 
Wed Apr  6 22:04:44 2022 - [info]  OK.
Wed Apr  6 22:04:44 2022 - [warning] shutdown_script is not defined.
Wed Apr  6 22:04:44 2022 - [info] Set master ping interval 1 seconds.
Wed Apr  6 22:04:44 2022 - [warning] secondary_check_script is not defined. It is highly recommended setting it to check master reachability from two or more routes.
Wed Apr  6 22:04:44 2022 - [info] Starting ping health check on 192.168.1.2(192.168.1.2:3306)..
Wed Apr  6 22:04:44 2022 - [info] Ping(SELECT) succeeded, waiting until MySQL doesn't respond..
[root@mha ~]# 

Open a new log window and watch whether the VIP and master fail over

[root@mha ~]# tail -0f /var/log/masterha/app1/manager.log
Wed Apr  6 22:13:40 2022 - [warning] Got error on MySQL select ping: 2006 (MySQL server has gone away)
Wed Apr  6 22:13:40 2022 - [info] Executing SSH check script: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/log --output_file=/tmp/save_binary_logs_test --manager_version=0.57 --binlog_prefix=mysql-bin
Wed Apr  6 22:13:40 2022 - [info] HealthCheck: SSH to 192.168.1.2 is reachable.
Wed Apr  6 22:13:41 2022 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.1.2' (111))
Wed Apr  6 22:13:41 2022 - [warning] Connection failed 2 time(s)..
Wed Apr  6 22:13:42 2022 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.1.2' (111))
Wed Apr  6 22:13:42 2022 - [warning] Connection failed 3 time(s)..
Wed Apr  6 22:13:43 2022 - [warning] Got error on MySQL connect: 2003 (Can't connect to MySQL server on '192.168.1.2' (111))
Wed Apr  6 22:13:43 2022 - [warning] Connection failed 4 time(s)..
Wed Apr  6 22:13:43 2022 - [warning] Master is not reachable from health checker!
Wed Apr  6 22:13:43 2022 - [warning] Master 192.168.1.2(192.168.1.2:3306) is not reachable!
Wed Apr  6 22:13:43 2022 - [warning] SSH is reachable.
Wed Apr  6 22:13:43 2022 - [info] Connecting to a master server failed. Reading configuration file /etc/masterha_default.cnf and /etc/masterha/app1.cnf again, and trying to connect to all servers to check server status..
Wed Apr  6 22:13:43 2022 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Apr  6 22:13:43 2022 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:13:43 2022 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Apr  6 22:13:44 2022 - [info] GTID failover mode = 0
Wed Apr  6 22:13:44 2022 - [info] Dead Servers:
Wed Apr  6 22:13:44 2022 - [info]   192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:44 2022 - [info] Alive Servers:
Wed Apr  6 22:13:44 2022 - [info]   192.168.1.3(192.168.1.3:3306)
Wed Apr  6 22:13:44 2022 - [info]   192.168.1.4(192.168.1.4:3306)
Wed Apr  6 22:13:44 2022 - [info] Alive Slaves:
Wed Apr  6 22:13:44 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:44 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:44 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:44 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:44 2022 - [info] Checking slave configurations..
Wed Apr  6 22:13:44 2022 - [info]  read_only=1 is not set on slave 192.168.1.3(192.168.1.3:3306).
Wed Apr  6 22:13:44 2022 - [info]  read_only=1 is not set on slave 192.168.1.4(192.168.1.4:3306).
Wed Apr  6 22:13:44 2022 - [info] Checking replication filtering settings..
Wed Apr  6 22:13:44 2022 - [info]  Replication filtering check ok.
Wed Apr  6 22:13:44 2022 - [info] Master is down!
Wed Apr  6 22:13:44 2022 - [info] Terminating monitoring script.
Wed Apr  6 22:13:44 2022 - [info] Got exit code 20 (Master dead).
Wed Apr  6 22:13:44 2022 - [info] MHA::MasterFailover version 0.57.
Wed Apr  6 22:13:44 2022 - [info] Starting master failover.
Wed Apr  6 22:13:44 2022 - [info] 
Wed Apr  6 22:13:44 2022 - [info] * Phase 1: Configuration Check Phase..
Wed Apr  6 22:13:44 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] GTID failover mode = 0
Wed Apr  6 22:13:45 2022 - [info] Dead Servers:
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info] Checking master reachability via MySQL(double check)...
Wed Apr  6 22:13:45 2022 - [info]  ok.
Wed Apr  6 22:13:45 2022 - [info] Alive Servers:
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.3(192.168.1.3:3306)
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.4(192.168.1.4:3306)
Wed Apr  6 22:13:45 2022 - [info] Alive Slaves:
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info] Starting Non-GTID based failover.
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] ** Phase 1: Configuration Check Phase completed.
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 2: Dead Master Shutdown Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] Forcing shutdown so that applications never connect to the current master..
Wed Apr  6 22:13:45 2022 - [info] Executing master IP deactivation script:
Wed Apr  6 22:13:45 2022 - [info]   /usr/bin/master_ip_failover --orig_master_host=192.168.1.2 --orig_master_ip=192.168.1.2 --orig_master_port=3306 --command=stopssh --ssh_user=root  

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.1.200/24===

Disabling the VIP on old master: 192.168.1.2 
SIOCSIFFLAGS: Cannot assign requested address
Wed Apr  6 22:13:45 2022 - [info]  done.
Wed Apr  6 22:13:45 2022 - [warning] shutdown_script is not set. Skipping explicit shutting down of the dead master.
Wed Apr  6 22:13:45 2022 - [info] * Phase 2: Dead Master Shutdown Phase completed.
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3: Master Recovery Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3.1: Getting Latest Slaves Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] The latest binary log file/position on all slaves is mysql-bin.000002:154
Wed Apr  6 22:13:45 2022 - [info] Latest slaves (Slaves that received relay log files to the latest):
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info] The oldest binary log file/position on all slaves is mysql-bin.000002:154
Wed Apr  6 22:13:45 2022 - [info] Oldest slaves:
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.3(192.168.1.3:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info]   192.168.1.4(192.168.1.4:3306)  Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Wed Apr  6 22:13:45 2022 - [info]     Replicating from 192.168.1.2(192.168.1.2:3306)
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3.2: Saving Dead Master's Binlog Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] Fetching dead master's binary logs..
Wed Apr  6 22:13:45 2022 - [info] Executing command on the dead master 192.168.1.2(192.168.1.2:3306): save_binary_logs --command=save --start_file=mysql-bin.000002  --start_pos=154 --binlog_dir=/data/mysql/log --output_file=/tmp/saved_master_binlog_from_192.168.1.2_3306_20220406221344.binlog --handle_raw_binlog=1 --disable_log_bin=0 --manager_version=0.57
  Creating /tmp if not exists..    ok.
 Concat binary/relay logs from mysql-bin.000002 pos 154 to mysql-bin.000002 EOF into /tmp/saved_master_binlog_from_192.168.1.2_3306_20220406221344.binlog ..
 Binlog Checksum enabled
 Dumping binlog format description event, from position 0 to 154.. ok.
 No need to dump effective binlog data from /data/mysql/log/mysql-bin.000002 (pos starts 154, filesize 154). Skipping.
 Binlog Checksum enabled
 /tmp/saved_master_binlog_from_192.168.1.2_3306_20220406221344.binlog has no effective data events.
Event not exists.
Wed Apr  6 22:13:45 2022 - [info] Additional events were not found from the orig master. No need to save.
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3.3: Determining New Master Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] Finding the latest slave that has all relay logs for recovering other slaves..
Wed Apr  6 22:13:45 2022 - [info] All slaves received relay logs to the same position. No need to resync each other.
Wed Apr  6 22:13:45 2022 - [info] Searching new master from slaves..
Wed Apr  6 22:13:45 2022 - [info]  Candidate masters from the configuration file:
Wed Apr  6 22:13:45 2022 - [info]  Non-candidate masters:
Wed Apr  6 22:13:45 2022 - [info] New master is 192.168.1.3(192.168.1.3:3306)
Wed Apr  6 22:13:45 2022 - [info] Starting master failover..
Wed Apr  6 22:13:45 2022 - [info] 
From:
192.168.1.2(192.168.1.2:3306) (current master)
 +--192.168.1.3(192.168.1.3:3306)
 +--192.168.1.4(192.168.1.4:3306)

To:
192.168.1.3(192.168.1.3:3306) (new master)
 +--192.168.1.4(192.168.1.4:3306)
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3.3: New Master Diff Log Generation Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] * Phase 3.4: Master Log Apply Phase..
Wed Apr  6 22:13:45 2022 - [info] 
Wed Apr  6 22:13:45 2022 - [info] *NOTICE: If any error happens from this phase, manual recovery is needed.
Wed Apr  6 22:13:45 2022 - [info] Starting recovery on 192.168.1.3(192.168.1.3:3306)..
Wed Apr  6 22:13:45 2022 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Wed Apr  6 22:13:45 2022 - [info]   done.
Wed Apr  6 22:13:45 2022 - [info]  All relay logs were successfully applied.
Wed Apr  6 22:13:45 2022 - [info] Getting new master's binlog name and position..
Wed Apr  6 22:13:45 2022 - [info]  mysql-bin.000002:154
Wed Apr  6 22:13:45 2022 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.1.3', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=154, MASTER_USER='hello', MASTER_PASSWORD='xxx';
Wed Apr  6 22:13:45 2022 - [info] Executing master IP activate script:
Wed Apr  6 22:13:45 2022 - [info]   /usr/bin/master_ip_failover --command=start --ssh_user=root --orig_master_host=192.168.1.2 --orig_master_ip=192.168.1.2 --orig_master_port=3306 --new_master_host=192.168.1.3 --new_master_ip=192.168.1.3 --new_master_port=3306 --new_master_user='manager'   --new_master_password=xxx
Unknown option: new_master_user
Unknown option: new_master_password

IN SCRIPT TEST====/sbin/ifconfig ens33:1 down==/sbin/ifconfig ens33:1 192.168.1.200/24===

Enabling the VIP - 192.168.1.200/24 on the new master - 192.168.1.3 
bash: /sbin/ifconfig: No such file or directory
Wed Apr  6 22:13:46 2022 - [info]  OK.
Wed Apr  6 22:13:46 2022 - [info] ** Finished master recovery successfully.
Wed Apr  6 22:13:46 2022 - [info] * Phase 3: Master Recovery Phase completed.
Wed Apr  6 22:13:46 2022 - [info] 
Wed Apr  6 22:13:46 2022 - [info] * Phase 4: Slaves Recovery Phase..
Wed Apr  6 22:13:46 2022 - [info] 
Wed Apr  6 22:13:46 2022 - [info] * Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
Wed Apr  6 22:13:46 2022 - [info] 
Wed Apr  6 22:13:46 2022 - [info] -- Slave diff file generation on host 192.168.1.4(192.168.1.4:3306) started, pid: 1291. Check tmp log /var/log/masterha/app1/192.168.1.4_3306_20220406221344.log if it takes time..
Wed Apr  6 22:13:47 2022 - [info] 
Wed Apr  6 22:13:47 2022 - [info] Log messages from 192.168.1.4 ...
Wed Apr  6 22:13:47 2022 - [info] 
Wed Apr  6 22:13:46 2022 - [info]  This server has all relay logs. No need to generate diff files from the latest slave.
Wed Apr  6 22:13:47 2022 - [info] End of log messages from 192.168.1.4.
Wed Apr  6 22:13:47 2022 - [info] -- 192.168.1.4(192.168.1.4:3306) has the latest relay log events.
Wed Apr  6 22:13:47 2022 - [info] Generating relay diff files from the latest slave succeeded.
Wed Apr  6 22:13:47 2022 - [info] 
Wed Apr  6 22:13:47 2022 - [info] * Phase 4.2: Starting Parallel Slave Log Apply Phase..
Wed Apr  6 22:13:47 2022 - [info] 
Wed Apr  6 22:13:47 2022 - [info] -- Slave recovery on host 192.168.1.4(192.168.1.4:3306) started, pid: 1293. Check tmp log /var/log/masterha/app1/192.168.1.4_3306_20220406221344.log if it takes time..
Wed Apr  6 22:13:48 2022 - [info] 
Wed Apr  6 22:13:48 2022 - [info] Log messages from 192.168.1.4 ...
Wed Apr  6 22:13:48 2022 - [info] 
Wed Apr  6 22:13:47 2022 - [info] Starting recovery on 192.168.1.4(192.168.1.4:3306)..
Wed Apr  6 22:13:47 2022 - [info]  This server has all relay logs. Waiting all logs to be applied.. 
Wed Apr  6 22:13:47 2022 - [info]   done.
Wed Apr  6 22:13:47 2022 - [info]  All relay logs were successfully applied.
Wed Apr  6 22:13:47 2022 - [info]  Resetting slave 192.168.1.4(192.168.1.4:3306) and starting replication from the new master 192.168.1.3(192.168.1.3:3306)..
Wed Apr  6 22:13:47 2022 - [info]  Executed CHANGE MASTER.
Wed Apr  6 22:13:47 2022 - [info]  Slave started.
Wed Apr  6 22:13:48 2022 - [info] End of log messages from 192.168.1.4.
Wed Apr  6 22:13:48 2022 - [info] -- Slave recovery on host 192.168.1.4(192.168.1.4:3306) succeeded.
Wed Apr  6 22:13:48 2022 - [info] All new slave servers recovered successfully.
Wed Apr  6 22:13:48 2022 - [info] 
Wed Apr  6 22:13:48 2022 - [info] * Phase 5: New master cleanup phase..
Wed Apr  6 22:13:48 2022 - [info] 
Wed Apr  6 22:13:48 2022 - [info] Resetting slave info on the new master..
Wed Apr  6 22:13:48 2022 - [info]  192.168.1.3: Resetting slave info succeeded.
Wed Apr  6 22:13:48 2022 - [info] Master failover to 192.168.1.3(192.168.1.3:3306) completed successfully.
Wed Apr  6 22:13:48 2022 - [info] Deleted server1 entry from /etc/masterha/app1.cnf .
Wed Apr  6 22:13:48 2022 - [info] 

----- Failover Report -----

app1: MySQL Master failover 192.168.1.2(192.168.1.2:3306) to 192.168.1.3(192.168.1.3:3306) succeeded

Master 192.168.1.2(192.168.1.2:3306) is down!

Check MHA Manager logs at mha:/var/log/masterha/app1/manager.log for details.

Started automated(non-interactive) failover.
Invalidated master IP address on 192.168.1.2(192.168.1.2:3306)
The latest slave 192.168.1.3(192.168.1.3:3306) has all relay logs for recovery.
Selected 192.168.1.3(192.168.1.3:3306) as a new master.
192.168.1.3(192.168.1.3:3306): OK: Applying all logs succeeded.
192.168.1.3(192.168.1.3:3306): OK: Activated master IP address.
192.168.1.4(192.168.1.4:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.1.4(192.168.1.4:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.1.3(192.168.1.3:3306)
192.168.1.3(192.168.1.3:3306): Resetting slave info succeeded.
Master failover to 192.168.1.3(192.168.1.3:3306) completed successfully.
Wed Apr  6 22:13:48 2022 - [info] Sending mail..
sh: /usr/local/send_report: No such file or directory
Wed Apr  6 22:13:48 2022 - [error][/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm, ln2066] Failed to send mail with return code 127:0

The log shows that both the VIP and the master failed over successfully. The MHA setup is now complete.
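
As a hedged double-check of the failover (hostnames per the table above): the VIP should now sit on mysql-02 (192.168.1.3), and mysql-03 should be replicating from the new master:

[root@mysql-02 ~]# ip addr show ens33 | grep 192.168.1.200
[root@mysql-03 ~]# mysql -uroot -p1 -e 'show slave status\G' | grep -E 'Master_Host|_Running'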

Build the ceph cluster

Hostname    IP
ceph01      192.168.1.6
ceph02      192.168.1.7
ceph03      192.168.1.8

Each of these three servers gets three extra 100G disks.
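
Assuming the new disks show up as sdb, sdc, and sdd (device names may differ per machine), a quick hedged check on each node:

[root@ceph01 ~]# lsblk   # the three 100G disks should be listed, e.g. sdb/sdc/sdd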

Update the host time

Every host needs this update.

[root@ceph01 ~]# ntpdate ntp1.aliyun.com
6 Apr 15:34:50 ntpdate[1467]: step time server 120.25.115.20 offset -28798.923817 sec
[root@ceph01 ~]# 

Create a cron job

[root@ceph01 ~]# crontab -l
30 * * * * ntpdate ntp1.aliyun.com

Modify the hosts file

First change the hostnames, then add the entries.

[root@ceph01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.6 ceph01
192.168.1.7 ceph02
192.168.1.8 ceph03

Set up passwordless SSH login

Keys must be created on every host.

[root@ceph01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): # type nothing, just press Enter
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): # type nothing, just press Enter
Enter same passphrase again: # type nothing, just press Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:LFxoaPQyaj5aU37aPwiuXKJ/R+/I8HG7eNPTmvYHHHs root@ceph01
The key's randomart image is:
+---[RSA 2048]----+
|    .            |
|   . o .         |
|    = + .        |
|   o = o    .    |
|  o . o S  . o   |
| o o. ..    + E  |
|  *.+ooo.. . o   |
| = =oBo==.+.. .  |
|o.+o..*+==o+..   |
+----[SHA256]-----+
[root@ceph01 ~]# 

Send the key to all hosts

[root@ceph01 ~]# for i in 6 7 8;do ssh-copy-id 192.168.1.$i;done

Upload and extract the packages

Link: https://pan.baidu.com/s/1DwCtwDd5_tv4uVQE-PxITQ
Extraction code: mlzc

[root@ceph01 ~]# tar -zxvf ceph-12.2.12.tar.gz 

Configure the ceph yum repository

[root@ceph01 ~]# vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=file:///root/ceph
enabled=1
gpgcheck=0

Send the extracted packages and the yum repo file

[root@ceph01 ~]# for i in 7 8;do scp -r /root/ceph root@192.168.1.$i:~;done
[root@ceph01 ~]# for i in 7 8;do scp -r /etc/yum.repos.d/ root@192.168.1.$i:/etc/;done

Install epel-release (all nodes)

[root@ceph01 ~]# yum -y install epel-release yum-plugin-priorities yum-utils ntpdate

Deploy ceph on all hosts

[root@ceph01 ~]# yum -y install ceph-deploy ceph ceph-radosgw snappy leveldb gdisk python-argparse gperftools-libs

Deploy services on the management node

Note: mon can also be deployed on ceph02 and ceph03 at the same time for high availability; a production cluster should run at least 3 independent mons (see the sketch below).
Work in the /etc/ceph directory: create a new cluster and set ceph01 as the mon node.
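
For the HA variant mentioned in the note, a hedged sketch is simply to pass all three hosts to ceph-deploy; this walkthrough continues with the single-mon form below:

[root@ceph01 ceph]# ceph-deploy new ceph01 ceph02 ceph03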

[root@ceph01 ~]# cd /etc/ceph/
[root@ceph01 ceph]# ceph-deploy new ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f8dc29cce60>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f8dc2351368>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph01']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: /usr/sbin/ip link show
[ceph01][INFO  ] Running command: /usr/sbin/ip addr show
[ceph01][DEBUG ] IP addresses found: [u'192.168.1.6']
[ceph_deploy.new][DEBUG ] Resolving host ceph01
[ceph_deploy.new][DEBUG ] Monitor ceph01 at 192.168.1.6
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.1.6']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@ceph01 ceph]# 

After the command finishes, three files appear in /etc/ceph: a ceph configuration file that can be tuned with various parameters (note that once the OSD daemons are created and mounted, configuration changes must be made with the command-line tools; editing the config file has no effect, so plan the tuning parameters in advance; a sketch of a runtime change follows the listing below), a monitor keyring, and the deployment log.

[root@ceph01 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring  rbdmap
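As an example of such a runtime change: once the OSDs are up, parameters are injected through the admin tools instead of ceph.conf. A hedged sketch (osd_max_backfills is only an illustrative parameter here, not a tuning recommendation):

[root@ceph01 ceph]# ceph tell osd.* injectargs '--osd_max_backfills 2'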

Change the replica count

Change the default replica count in the config file from 3 to 2, so the cluster can reach the active+clean state with only two OSDs.

[root@ceph01 ceph]# vim ceph.conf
[global]
fsid = be4ef425-886e-44fb-90bc-77d2575fa0e1
mon_initial_members = ceph01
mon_host = 192.168.1.6
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2  # add this line

Install the ceph monitor

[root@ceph01 ceph]# ceph-deploy mon create ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ff3d15ede18>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0x7ff3d185f488>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph01
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph01 ...
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.5.1804 Core
[ceph01][DEBUG ] determining if provided host has same hostname in remote
[ceph01][DEBUG ] get remote short hostname
[ceph01][DEBUG ] deploying mon to ceph01
[ceph01][DEBUG ] get remote short hostname
[ceph01][DEBUG ] remote hostname: ceph01
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph01][DEBUG ] create the mon path if it does not exist
[ceph01][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph01/done
[ceph01][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph01/done
[ceph01][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph01.mon.keyring
[ceph01][DEBUG ] create the monitor keyring file
[ceph01][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ceph01 --keyring /var/lib/ceph/tmp/ceph-ceph01.mon.keyring --setuser 167 --setgroup 167
[ceph01][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph01.mon.keyring
[ceph01][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph01][DEBUG ] create the init path if it does not exist
[ceph01][INFO  ] Running command: systemctl enable ceph.target
[ceph01][INFO  ] Running command: systemctl enable ceph-mon@ceph01
[ceph01][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph01.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph01][INFO  ] Running command: systemctl start ceph-mon@ceph01
[ceph01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph01.asok mon_status
[ceph01][DEBUG ] ********************************************************************************
[ceph01][DEBUG ] status for monitor: mon.ceph01
[ceph01][DEBUG ] {
[ceph01][DEBUG ]   "election_epoch": 3, 
[ceph01][DEBUG ]   "extra_probe_peers": [], 
[ceph01][DEBUG ]   "feature_map": {
[ceph01][DEBUG ]     "mon": {
[ceph01][DEBUG ]       "group": {
[ceph01][DEBUG ]         "features": "0x3ffddff8eeacfffb", 
[ceph01][DEBUG ]         "num": 1, 
[ceph01][DEBUG ]         "release": "luminous"
[ceph01][DEBUG ]       }
[ceph01][DEBUG ]     }
[ceph01][DEBUG ]   }, 
[ceph01][DEBUG ]   "features": {
[ceph01][DEBUG ]     "quorum_con": "4611087853746454523", 
[ceph01][DEBUG ]     "quorum_mon": [
[ceph01][DEBUG ]       "kraken", 
[ceph01][DEBUG ]       "luminous"
[ceph01][DEBUG ]     ], 
[ceph01][DEBUG ]     "required_con": "153140804152475648", 
[ceph01][DEBUG ]     "required_mon": [
[ceph01][DEBUG ]       "kraken", 
[ceph01][DEBUG ]       "luminous"
[ceph01][DEBUG ]     ]
[ceph01][DEBUG ]   }, 
[ceph01][DEBUG ]   "monmap": {
[ceph01][DEBUG ]     "created": "2022-04-06 16:16:46.287081", 
[ceph01][DEBUG ]     "epoch": 1, 
[ceph01][DEBUG ]     "features": {
[ceph01][DEBUG ]       "optional": [], 
[ceph01][DEBUG ]       "persistent": [
[ceph01][DEBUG ]         "kraken", 
[ceph01][DEBUG ]         "luminous"
[ceph01][DEBUG ]       ]
[ceph01][DEBUG ]     }, 
[ceph01][DEBUG ]     "fsid": "be4ef425-886e-44fb-90bc-77d2575fa0e1", 
[ceph01][DEBUG ]     "modified": "2022-04-06 16:16:46.287081", 
[ceph01][DEBUG ]     "mons": [
[ceph01][DEBUG ]       {
[ceph01][DEBUG ]         "addr": "192.168.1.6:6789/0", 
[ceph01][DEBUG ]         "name": "ceph01", 
[ceph01][DEBUG ]         "public_addr": "192.168.1.6:6789/0", 
[ceph01][DEBUG ]         "rank": 0
[ceph01][DEBUG ]       }
[ceph01][DEBUG ]     ]
[ceph01][DEBUG ]   }, 
[ceph01][DEBUG ]   "name": "ceph01", 
[ceph01][DEBUG ]   "outside_quorum": [], 
[ceph01][DEBUG ]   "quorum": [
[ceph01][DEBUG ]     0
[ceph01][DEBUG ]   ], 
[ceph01][DEBUG ]   "rank": 0, 
[ceph01][DEBUG ]   "state": "leader", 
[ceph01][DEBUG ]   "sync_provider": []
[ceph01][DEBUG ] }
[ceph01][DEBUG ] ********************************************************************************
[ceph01][INFO  ] monitor: mon.ceph01 is running
[ceph01][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph01.asok mon_status
[root@ceph01 ceph]# 

Gather the node keyring files

[root@ceph01 ceph]# ceph-deploy gatherkeys ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy gatherkeys ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd4e783fbd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph01']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7fd4e7a93a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory /tmp/tmpcFRJma
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] get remote short hostname
[ceph01][DEBUG ] fetch remote file
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.ceph01.asok mon_status
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.admin
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get-or-create client.admin osd allow * mds allow * mon allow * mgr allow *
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.bootstrap-mds
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get-or-create client.bootstrap-mds mon allow profile bootstrap-mds
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.bootstrap-mgr
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.bootstrap-osd
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get-or-create client.bootstrap-osd mon allow profile bootstrap-osd
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get client.bootstrap-rgw
[ceph01][INFO  ] Running command: /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-ceph01/keyring auth get-or-create client.bootstrap-rgw mon allow profile bootstrap-rgw
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpcFRJma
[root@ceph01 ceph]# 

Check the result

[root@ceph01 ceph]# ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log  rbdmap
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring

You can see that ceph.client.admin.keyring is now present.

View the key

[root@ceph01 ceph]# cat ceph.client.admin.keyring 
[client.admin]
        key = AQDgyk1i5gPqMRAAQa0dPgyF71C5Psiq/ebuQQ==
[root@ceph01 ceph]# 

Deploy the OSD service

Use ceph's automatic partitioning

Each disk needs to be zapped once:
ceph-deploy disk zap ceph01 /dev/sdb
ceph-deploy disk zap ceph02 /dev/sdb
ceph-deploy disk zap ceph03 /dev/sdb
All three commands only need to be run on ceph01.

[root@ceph01 ceph]# ceph-deploy disk zap ceph01 /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy disk zap ceph01 /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : zap
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1565785290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ceph01
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f15659db9b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : ['/dev/sdb']
[ceph_deploy.osd][DEBUG ] zapping /dev/sdb on ceph01
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph01][DEBUG ] zeroing last few blocks of device
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: /usr/sbin/ceph-volume lvm zap /dev/sdb
[ceph01][DEBUG ] --> Zapping: /dev/sdb
[ceph01][DEBUG ] --> --destroy was not specified, but zapping a whole device will remove the partition table
[ceph01][DEBUG ] Running command: wipefs --all /dev/sdb
[ceph01][DEBUG ] Running command: dd if=/dev/zero of=/dev/sdb bs=1M count=10
[ceph01][DEBUG ] --> Zapping successful for: <Raw Device: /dev/sdb>
You have mail in /var/spool/mail/root
[root@ceph01 ceph]# 

Add the OSD nodes

[root@ceph01 ceph]# ceph-deploy osd create ceph01 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph01 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f95b1d403b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph01
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f95b1f91938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph01
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph01][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ] Running command: vgcreate --force --yes ceph-8434389e-41f0-4d97-aec0-386ceae136f5 /dev/sdb
[ceph01][DEBUG ]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph01][DEBUG ]  stdout: Volume group "ceph-8434389e-41f0-4d97-aec0-386ceae136f5" successfully created
[ceph01][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12 ceph-8434389e-41f0-4d97-aec0-386ceae136f5
[ceph01][DEBUG ]  stdout: Logical volume "osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12" created.
[ceph01][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph01][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph01][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-0
[ceph01][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph01][DEBUG ] Running command: ln -s /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12 /var/lib/ceph/osd/ceph-0/block
[ceph01][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph01][DEBUG ]  stderr: got monmap epoch 1
[ceph01][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAjzE1iLK8LLBAAdcsplCzkGVOD8khpaomgzw==
[ceph01][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph01][DEBUG ] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQAjzE1iLK8LLBAAdcsplCzkGVOD8khpaomgzw== with 0 caps)
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph01][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 2d5cfb18-85fc-49a3-b829-cce6b7e39e12 --setuser ceph --setgroup ceph
[ceph01][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph01][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12 --path /var/lib/ceph/osd/ceph-0
[ceph01][DEBUG ] Running command: ln -snf /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12 /var/lib/ceph/osd/ceph-0/block
[ceph01][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph01][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph01][DEBUG ] Running command: systemctl enable ceph-volume@lvm-0-2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-2d5cfb18-85fc-49a3-b829-cce6b7e39e12.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph01][DEBUG ] Running command: systemctl enable --runtime ceph-osd@0
[ceph01][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph01][DEBUG ] Running command: systemctl start ceph-osd@0
[ceph01][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 0
[ceph01][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
[ceph01][INFO  ] checking OSD status...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph01 is now ready for osd use.
[root@ceph01 ceph]# 
[root@ceph01 ceph]# ceph-deploy osd create ceph02 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph02 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa6d55563b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph02
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fa6d57a7938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph02
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph02][WARNIN] osd keyring does not exist yet, creating one
[ceph02][DEBUG ] create a keyring file
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph02][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph02][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ] Running command: vgcreate --force --yes ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123 /dev/sdb
[ceph02][DEBUG ]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph02][DEBUG ]  stdout: Volume group "ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123" successfully created
[ceph02][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536 ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123
[ceph02][DEBUG ]  stdout: Logical volume "osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536" created.
[ceph02][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph02][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[ceph02][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-1
[ceph02][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph02][DEBUG ] Running command: ln -s /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536 /var/lib/ceph/osd/ceph-1/block
[ceph02][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[ceph02][DEBUG ]  stderr: got monmap epoch 1
[ceph02][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQBczE1iamDkJxAAFObunS34O263sXaAunkoCg==
[ceph02][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[ceph02][DEBUG ] added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBczE1iamDkJxAAFObunS34O263sXaAunkoCg== with 0 caps)
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[ceph02][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 9e415d1f-71b4-4888-bd55-3aafd6f57536 --setuser ceph --setgroup ceph
[ceph02][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph02][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536 --path /var/lib/ceph/osd/ceph-1
[ceph02][DEBUG ] Running command: ln -snf /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536 /var/lib/ceph/osd/ceph-1/block
[ceph02][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph02][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph02][DEBUG ] Running command: systemctl enable ceph-volume@lvm-1-9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-9e415d1f-71b4-4888-bd55-3aafd6f57536.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph02][DEBUG ] Running command: systemctl enable --runtime ceph-osd@1
[ceph02][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph02][DEBUG ] Running command: systemctl start ceph-osd@1
[ceph02][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 1
[ceph02][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
[ceph02][INFO  ] checking OSD status...
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use.
[root@ceph01 ceph]# 
[root@ceph01 ceph]# ceph-deploy osd create ceph03 --data /dev/sdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph03 --data /dev/sdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7cd922a3b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph03
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f7cd947b938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sdb
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph03
[ceph03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph03][WARNIN] osd keyring does not exist yet, creating one
[ceph03][DEBUG ] create a keyring file
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdb
[ceph03][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph03][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ] Running command: vgcreate --force --yes ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d /dev/sdb
[ceph03][DEBUG ]  stdout: Physical volume "/dev/sdb" successfully created.
[ceph03][DEBUG ]  stdout: Volume group "ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d" successfully created
[ceph03][DEBUG ] Running command: lvcreate --yes -l 100%FREE -n osd-block-79050ca1-6f18-4413-8276-985d5c79e7df ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d
[ceph03][DEBUG ]  stdout: Logical volume "osd-block-79050ca1-6f18-4413-8276-985d5c79e7df" created.
[ceph03][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[ceph03][DEBUG ] Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph03][DEBUG ] Running command: restorecon /var/lib/ceph/osd/ceph-2
[ceph03][DEBUG ] Running command: chown -h ceph:ceph /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph03][DEBUG ] Running command: ln -s /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df /var/lib/ceph/osd/ceph-2/block
[ceph03][DEBUG ] Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph03][DEBUG ]  stderr: got monmap epoch 1
[ceph03][DEBUG ] Running command: ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQCjzE1iwT/UExAAxlLPej7ulm+8hFIG4Te/Jw==
[ceph03][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph03][DEBUG ] added entity osd.2 auth auth(auid = 18446744073709551615 key=AQCjzE1iwT/UExAAxlLPej7ulm+8hFIG4Te/Jw== with 0 caps)
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph03][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 79050ca1-6f18-4413-8276-985d5c79e7df --setuser ceph --setgroup ceph
[ceph03][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sdb
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph03][DEBUG ] Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df --path /var/lib/ceph/osd/ceph-2
[ceph03][DEBUG ] Running command: ln -snf /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df /var/lib/ceph/osd/ceph-2/block
[ceph03][DEBUG ] Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /dev/dm-2
[ceph03][DEBUG ] Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph03][DEBUG ] Running command: systemctl enable ceph-volume@lvm-2-79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-79050ca1-6f18-4413-8276-985d5c79e7df.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph03][DEBUG ] Running command: systemctl enable --runtime ceph-osd@2
[ceph03][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph03][DEBUG ] Running command: systemctl start ceph-osd@2
[ceph03][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 2
[ceph03][DEBUG ] --> ceph-volume lvm create successful for: /dev/sdb
[ceph03][INFO  ] checking OSD status...
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph03 is now ready for osd use.
[root@ceph01 ceph]# 

Check the OSD status

[root@ceph01 ceph]# ceph-deploy osd list ceph01 ceph02 ceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd list ceph01 ceph02 ceph03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f081282c3b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  host                          : ['ceph01', 'ceph02', 'ceph03']
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f0812a7d938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph01...
[ceph01][DEBUG ] find the location of an executable
[ceph01][INFO  ] Running command: /usr/sbin/ceph-volume lvm list
[ceph01][DEBUG ] 
[ceph01][DEBUG ] 
[ceph01][DEBUG ] ====== osd.0 =======
[ceph01][DEBUG ] 
[ceph01][DEBUG ]   [block]    /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ] 
[ceph01][DEBUG ]       type                      block
[ceph01][DEBUG ]       osd id                    0
[ceph01][DEBUG ]       cluster fsid              b5927d07-90ea-4db7-b219-c239b38c8729
[ceph01][DEBUG ]       cluster name              ceph
[ceph01][DEBUG ]       osd fsid                  2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ]       encrypted                 0
[ceph01][DEBUG ]       cephx lockbox secret      
[ceph01][DEBUG ]       block uuid                W5FWGo-RCEX-A0cf-YYtD-G4aH-JLHn-qMiroq
[ceph01][DEBUG ]       block device              /dev/ceph-8434389e-41f0-4d97-aec0-386ceae136f5/osd-block-2d5cfb18-85fc-49a3-b829-cce6b7e39e12
[ceph01][DEBUG ]       vdo                       0
[ceph01][DEBUG ]       crush device class        None
[ceph01][DEBUG ]       devices                   /dev/sdb
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph02...
[ceph02][DEBUG ] find the location of an executable
[ceph02][INFO  ] Running command: /usr/sbin/ceph-volume lvm list
[ceph02][DEBUG ] 
[ceph02][DEBUG ] 
[ceph02][DEBUG ] ====== osd.1 =======
[ceph02][DEBUG ] 
[ceph02][DEBUG ]   [block]    /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ] 
[ceph02][DEBUG ]       type                      block
[ceph02][DEBUG ]       osd id                    1
[ceph02][DEBUG ]       cluster fsid              b5927d07-90ea-4db7-b219-c239b38c8729
[ceph02][DEBUG ]       cluster name              ceph
[ceph02][DEBUG ]       osd fsid                  9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ]       encrypted                 0
[ceph02][DEBUG ]       cephx lockbox secret      
[ceph02][DEBUG ]       block uuid                T4wna0-Nm9s-Jpes-d9M9-w2f8-IHda-dF7UZL
[ceph02][DEBUG ]       block device              /dev/ceph-c47f97d6-c1ce-4fc9-b9ea-f40fe140b123/osd-block-9e415d1f-71b4-4888-bd55-3aafd6f57536
[ceph02][DEBUG ]       vdo                       0
[ceph02][DEBUG ]       crush device class        None
[ceph02][DEBUG ]       devices                   /dev/sdb
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.osd][DEBUG ] Listing disks on ceph03...
[ceph03][DEBUG ] find the location of an executable
[ceph03][INFO  ] Running command: /usr/sbin/ceph-volume lvm list
[ceph03][DEBUG ] 
[ceph03][DEBUG ] 
[ceph03][DEBUG ] ====== osd.2 =======
[ceph03][DEBUG ] 
[ceph03][DEBUG ]   [block]    /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ] 
[ceph03][DEBUG ]       type                      block
[ceph03][DEBUG ]       osd id                    2
[ceph03][DEBUG ]       cluster fsid              b5927d07-90ea-4db7-b219-c239b38c8729
[ceph03][DEBUG ]       cluster name              ceph
[ceph03][DEBUG ]       osd fsid                  79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ]       encrypted                 0
[ceph03][DEBUG ]       cephx lockbox secret      
[ceph03][DEBUG ]       block uuid                vNofyZ-VNwx-6nFz-6m2S-7zl8-zLRq-6Rtwa5
[ceph03][DEBUG ]       block device              /dev/ceph-bf23c6bc-a2bf-4638-839e-1b0d14870a4d/osd-block-79050ca1-6f18-4413-8276-985d5c79e7df
[ceph03][DEBUG ]       vdo                       0
[ceph03][DEBUG ]       crush device class        None
[ceph03][DEBUG ]       devices                   /dev/sdb
[root@ceph01 ceph]# 

Deploy the mgr management service

Deploy the mgr service on the admin host. mgr can also be deployed on ceph02 and ceph03 at the same time for high availability, as sketched below.
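A sketch of that highly available variant, not used here:

[root@ceph01 ceph]# ceph-deploy mgr create ceph01 ceph02 ceph03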

[root@ceph01 ceph]# ceph-deploy mgr create ceph01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph01', 'ceph01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f50cec376c8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0x7f50cf2a31b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph01:ceph01
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph01
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph01][WARNIN] mgr keyring does not exist yet, creating one
[ceph01][DEBUG ] create a keyring file
[ceph01][DEBUG ] create path recursively if it doesn't exist
[ceph01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph01 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph01/keyring
[ceph01][INFO  ] Running command: systemctl enable ceph-mgr@ceph01
[ceph01][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph01.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph01][INFO  ] Running command: systemctl start ceph-mgr@ceph01
[ceph01][INFO  ] Running command: systemctl enable ceph.target
[root@ceph01 ceph]# 

Push a unified cluster configuration

[root@ceph01 ceph]# ceph-deploy admin ceph01 ceph02 ceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph01 ceph02 ceph03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f43876f4368>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph01', 'ceph02', 'ceph03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f4387f962a8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph01
[ceph01][DEBUG ] connected to host: ceph01 
[ceph01][DEBUG ] detect platform information from remote host
[ceph01][DEBUG ] detect machine type
[ceph01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph02
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph03
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
You have mail in /var/spool/mail/root
[root@ceph01 ceph]# 

Fix ceph.client.admin.keyring permissions on each node

Run this command on every node; a loop sketch follows the command.

[root@ceph01 ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring 
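Instead of logging in to each node by hand, a small sketch that reuses the SSH trust set up earlier:

[root@ceph01 ceph]# for i in 7 8;do ssh root@192.168.1.$i chmod +r /etc/ceph/ceph.client.admin.keyring;done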

Deploy the mds service

Install mds

[root@ceph01 ceph]# ceph-deploy mds create ceph02 ceph03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph02 ceph03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f13395a8290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7f1339803f50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('ceph02', 'ceph02'), ('ceph03', 'ceph03')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph02:ceph02 ceph03:ceph03
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph02
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph02][WARNIN] mds keyring does not exist yet, creating one
[ceph02][DEBUG ] create a keyring file
[ceph02][DEBUG ] create path if it doesn't exist
[ceph02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph02 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph02/keyring
[ceph02][INFO  ] Running command: systemctl enable ceph-mds@ceph02
[ceph02][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph02.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph02][INFO  ] Running command: systemctl start ceph-mds@ceph02
[ceph02][INFO  ] Running command: systemctl enable ceph.target
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.5.1804 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph03
[ceph03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph03][WARNIN] mds keyring does not exist yet, creating one
[ceph03][DEBUG ] create a keyring file
[ceph03][DEBUG ] create path if it doesn't exist
[ceph03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph03 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph03/keyring
[ceph03][INFO  ] Running command: systemctl enable ceph-mds@ceph03
[ceph03][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph03.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph03][INFO  ] Running command: systemctl start ceph-mds@ceph03
[ceph03][INFO  ] Running command: systemctl enable ceph.target
[root@ceph01 ceph]# 

Check the mds service

[root@ceph01 ceph]# ceph mds stat
, 2 up:standby

Check the cluster status

[root@ceph01 ceph]# ceph -s
  cluster:
    id:     b5927d07-90ea-4db7-b219-c239b38c8729
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph01
    mgr: ceph01(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   3.01GiB used, 297GiB / 300GiB avail
    pgs:

[root@ceph01 ceph]# 

Now create the ceph file system

For the web servers to share data, we have to create a file system. Raw block devices will not work here: a block device belongs to a single host and cannot share data; only a file system supports shared access.

[root@ceph01 ceph]# ceph fs ls
No filesystems enabled   # no ceph file system exists yet

Create the storage pools

With fewer than 5 OSDs, set pg_num to 128.
With 5 to 10 OSDs, set pg_num to 512.
With 10 to 50 OSDs, set pg_num to 4096.
With more than 50 OSDs, you need to understand the trade-offs and calculate pg_num yourself.
The pgcalc tool can help with that calculation (a rough calculation sketch follows this list):
https://ceph.com/pgcalc/
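For this cluster, the formula quoted in the note further below gives a rough ballpark. A sketch of the arithmetic, assuming 3 OSDs, 2 replicas, and the 2 cephfs pools created next:

[root@ceph01 ceph]# echo $(( 3 * 100 / 2 / 2 ))
75

Rounded to a nearby power of two that is 64 or 128; since this cluster has fewer than 5 OSDs, the rule of thumb above picks 128, which the commands below use.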

[root@ceph01 ceph]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ceph01 ceph]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created

Create the file system

Create a file system on the two pools just created.

[root@ceph01 ceph]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1

View the ceph file system

[root@ceph01 ceph]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

Check the mds node status

[root@ceph01 ceph]# ceph mds stat
cephfs-1/1/1 up  {0=ceph03=up:active}, 1 up:standby

One mds is active and the other is in hot standby.
(The ceph cluster is now fully deployed.)

Backing up MySQL data to Ceph

Backing up data to Ceph uses Ceph's RBD.
(Follow the same steps to create test2 and test3 and mount them on mysql-02 and mysql-03; a one-loop sketch appears after test1 is created below.)

Create the Ceph RBD

First check whether the Linux kernel supports RBD.

[root@ceph01 ceph]# modprobe rbd

If this command errors out, the kernel needs to be upgraded first.

[root@ceph01 ceph]# lsmod | grep rbd
rbd                    83728  0 
libceph               301687  1 rbd

Create the rbd storage pool

[root@ceph01 ceph]# ceph osd pool create rbd 64
pool 'rbd' created

Note:
Check the status with ceph -s.
HEALTH_WARN: too few PGs per OSD means the PG count is too low.
PG calculation:
total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
For example, with 9 OSDs, 3 replicas, and 1 default rbd pool,
the result is 300; this is usually rounded to the nearest power of two, which here is 256.
View the current PG values:
#ceph osd pool get rbd pg_num
pg_num: 64
#ceph osd pool get rbd pgp_num
pgp_num: 64
Set the PG count manually:
#ceph osd pool set rbd pg_num 256
set pool 0 pg_num to 256
#ceph osd pool set rbd pgp_num 256
set pool 0 pgp_num to 256

Create a block device of the specified size as the disk

[root@ceph01 ceph]# rbd create --size 102400 rbd/test1
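As noted at the start of this section, mysql-02 and mysql-03 each get their own image. A sketch of creating test2 and test3 in one loop:

[root@ceph01 ceph]# for i in 2 3;do rbd create --size 102400 rbd/test$i;done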

View test1's details

[root@ceph01 ceph]# rbd info test1
rbd image 'test1':
        size 100GiB in 25600 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.5e2e6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags: 
        create_timestamp: Thu Apr  7 18:40:22 2022
[root@ceph01 ceph]# 

Map the block device, i.e. use rbd to map the image name through the kernel module

Install the client package first

[root@mysql-01 ~]# yum -y install ceph-common

Then disable the image features the kernel client does not support, and map it:

[root@mysql-01 ~]# rbd feature disable test1 object-map fast-diff deep-flatten

Map it and check

[root@mysql-01 ~]# rbd map test1
/dev/rbd0
[root@mysql-01 ~]# ls /dev/rbd0
/dev/rbd0

Create the mount directory

[root@mysql-01 ~]# mkdir -p /data/mysql/databackup

Format the device

[root@mysql-01 ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=1638400 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26214400, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12800, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Mount it

[root@mysql-01 ~]# mount /dev/rbd0 /data/mysql/databackup/
[root@mysql-01 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        96G  9.6G   86G  11% /
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G   12M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/sr0                iso9660   4.2G  4.2G     0 100% /mnt/cdrom
/dev/sda1               xfs       497M  135M  362M  28% /boot
tmpfs                   tmpfs     378M     0  378M   0% /run/user/0
/dev/rbd0               xfs       100G   33M  100G   1% /data/mysql/databackup

Write some test data

[root@mysql-01 ~]# dd if=/dev/zero of=/data/mysql/databackup/file bs=100M count=1 oflag=direct
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 0.435444 s, 241 MB/s

Check the usage

[root@mysql-01 ~]# rados df
POOL_NAME       USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD      WR_OPS WR      
cephfs_data          0B       0      0      0                  0       0        0      0      0B    608  133MiB 
cephfs_metadata 3.76MiB      22      0     44                  0       0        0     13   41KiB    103 3.79MiB 
rbd              154MiB      59      0    118                  0       0        0    249 1.50MiB    374  153MiB 

total_objects    81
total_used       3.32GiB
total_avail      297GiB
total_space      300GiB

After writing 100 MB, the rbd pool's usage grows by roughly 100 MB: writes to the mounted /data/mysql/databackup directory go straight into the cluster's rbd pool and land on the OSDs.
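With the RBD mounted, the backup itself is just a dump written into that directory. A minimal sketch using mysqldump plus cron; the root password is a placeholder, and note the % that must be escaped inside crontab:

[root@mysql-01 ~]# mysqldump -uroot -p'MySQLPassword' --single-transaction --all-databases > /data/mysql/databackup/all-$(date +%F).sql
[root@mysql-01 ~]# crontab -l
0 3 * * * mysqldump -uroot -p'MySQLPassword' --single-transaction --all-databases > /data/mysql/databackup/all-$(date +\%F).sql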

Installing ansible

First, change the hostname.

Configure the yum repo

[root@ansible ~]# vim /etc/yum.repos.d/ansible.repo
[ansible]
name=ansible
baseurl=file:///root/ansible
enabled=1
gpgcheck=0

Upload the software package

Link: https://pan.baidu.com/s/1VqNbIUQalhLMF9UAl4V7SA
Extraction code: 83o1

[root@ansible ~]# ls
anaconda-ks.cfg  ansible.tar.gz
[root@ansible ~]# tar -zxvf ansible.tar.gz 

Install ansible

[root@ansible ~]# yum -y install ansible

Sync the host time

Sync the time on all hosts.

[root@ansible ~]# ntpdate ntp1.aliyun.com
 7 Apr 14:09:38 ntpdate[10997]: step time server 120.25.115.20 offset -28799.997873 sec

Configure the host inventory

Hostname    IP
ansible     192.168.1.9
nginx-01    192.168.1.10
nginx-02    192.168.1.11
nginx-03    192.168.1.13
apache      192.168.1.12
[root@ansible ~]# vim /etc/ansible/hosts
# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
#   - Comments begin with the '#' character
#   - Blank lines are ignored
#   - Groups of hosts are delimited by [header] elements
#   - You can enter hostnames or ip addresses
#   - A hostname/ip can be a member of multiple groups

# Ex 1: Ungrouped hosts, specify before any group headers.

## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10

# Ex 2: A collection of hosts belonging to the 'webservers' group

## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110

# If you have multiple hosts following a pattern you can specify
# them like this:

## www[001:006].example.com

# Ex 3: A collection of database servers in the 'dbservers' group

## [dbservers]
## 
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57

# Here's another example of host ranges, this time there are no
# leading 0s:

## db-[99:101]-node.example.com

[web01]
nginx-01 ansible_ssh_host=192.168.1.10 ansible_ssh_port=22 ansible_ssh_user=root ansible_ssh_pass=1
nginx-02 ansible_ssh_host=192.168.1.11 ansible_ssh_port=22 ansible_ssh_user=root ansible_ssh_pass=1

[web02]
apache ansible_ssh_host=192.168.1.12 ansible_ssh_port=22 ansible_ssh_user=root ansible_ssh_pass=1

Run a test

Before testing, you must first SSH into each managed host once (or edit /etc/ansible/ansible.cfg and uncomment host_key_checking = False, as shown below).
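If you take the config-file route, the line looks like this after uncommenting it:

[root@ansible ~]# grep host_key_checking /etc/ansible/ansible.cfg
host_key_checking = False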

[root@ansible ~]# ansible -i /etc/ansible/hosts web01 -m ping
nginx-01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
nginx-02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[root@ansible ~]# 

Set up passwordless access

Key-based login matters because plaintext passwords in the inventory file are insecure; switching to key pairs avoids that.

[root@ansible ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xCN4+Do8ZSNBiOqVwXUrTMbdQtvy6u/HVpBPUPVGZj0 root@ansible
The key's randomart image is:
+---[RSA 2048]----+
| ...++oo.   ....=|
|. .+++.++. .   Eo|
|.   *oo+=.  o   +|
|.  o +.oo. o . . |
|. . . = S.  +    |
| . . = ..    o   |
|    =  .  . .    |
|     o.    +     |
|       .ooo      |
+----[SHA256]-----+
[root@ansible ~]# for i in 10 11 12;do ssh-copy-id 192.168.1.$i;done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.1.10's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.1.10'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.1.11's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.1.11'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.1.12's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.1.12'"
and check to make sure that only the key(s) you wanted were added.
[root@ansible ~]# 

Update the host inventory (drop the plaintext passwords)

[root@ansible ~]# vim /etc/ansible/hosts
# (the stock comment header is unchanged; only the groups at the bottom differ)

[web01]
nginx-01 ansible_ssh_host=192.168.1.10 ansible_ssh_port=22 ansible_ssh_user=root
nginx-02 ansible_ssh_host=192.168.1.11 ansible_ssh_port=22 ansible_ssh_user=root

[web02]
apache ansible_ssh_host=192.168.1.12 ansible_ssh_port=22 ansible_ssh_user=root

Install the services

There are two ways to install nginx and apache from source with ansible:
Method 1: via shell scripts pushed with the script module.
Method 2: via a playbook.
We use method 1 here; a playbook sketch of method 2 follows for comparison.
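For reference, method 2 would wrap the same script in a playbook. This is only a sketch — the file name install_nginx.yml is hypothetical; the web01 group and the auto_install_nginx.sh script are the ones used below:

# install_nginx.yml — hypothetical playbook equivalent of the ad-hoc script call
- name: install nginx from source on the web01 group
  hosts: web01
  tasks:
    - name: push and run the source-install script on each host
      script: auto_install_nginx.sh

Run it with: ansible-playbook -i /etc/ansible/hosts install_nginx.yml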
Upload the scripts

[root@ansible ~]# ls
anaconda-ks.cfg  ansible  auto_install_apache.sh  auto_install_nginx.sh

Install

The same approach installs apache on the web02 group (see the example after the output below).

[root@ansible ~]# ansible -i /etc/ansible/hosts web01 -m script -a "auto_install_nginx.sh"
nginx-02 | CHANGED => {"changed": true, "rc": 0, "stderr": "Shared connection to 192.168.1.11 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.1.11 closed."], "stdout": "正在安装依赖关系\r\n\u001b[1;32m依赖包安装成功\u001b[0m\r\n正在创建nginx使用的组和用户\r\n\u001b[1;32m创建用户和组成功\u001b[0m\r\n正在下载软件包\r\n\u001b[1;32m软件包下载成功\u001b[0m\r\n正在解压软件包\r\n\u001b[1;32m软件包解压成功\u001b[0m\r\n正在修改配置文件,隐藏版本号\r\n\u001b[1;32m隐藏版本号成功\u001b[0m\r\n正在预编译\r\n\u001b[1;32m预编译成功\u001b[0m\r\n正在编译及安装\r\n\u001b[1;32m编译及安装成功\u001b[0m\r\n正在添加环境变量\r\n\u001b[1;32m添加环境变量成功\u001b[0m\r\n正在启动nginx\r\n\u001b[1;32m启动成功\u001b[0m\r\n正在生成启动脚本\r\n\u001b[1;32m生成脚本成功\u001b[0m\r\n正在添加权限\r\n\u001b[1;32m添加权限成功\u001b[0m\r\n正在配置开机自启服务\r\n\u001b[1;32m配置开机自启成功\u001b[0m\r\n", "stdout_lines": ["正在安装依赖关系", "\u001b[1;32m依赖包安装成功\u001b[0m", "正在创建nginx使用的组和用户", "\u001b[1;32m创建用户和组成功\u001b[0m", "正在下载软件包", "\u001b[1;32m软件包下载成功\u001b[0m", "正在解压软件包", "\u001b[1;32m软件包解压成功\u001b[0m", "正在修改配置文件,隐藏版本号", "\u001b[1;32m隐藏版本号成功\u001b[0m", "正在预编译", "\u001b[1;32m预编译成功\u001b[0m", "正在编译及安装", "\u001b[1;32m编译及安装成功\u001b[0m", "正在添加环境变量", "\u001b[1;32m添加环境变量成功\u001b[0m", "正在启动nginx", "\u001b[1;32m启动成功\u001b[0m", "正在生成启动脚本", "\u001b[1;32m生成脚本成功\u001b[0m", "正在添加权限", "\u001b[1;32m添加权限成功\u001b[0m", "正在配置开机自启服务", "\u001b[1;32m配置开机自启成功\u001b[0m"]
}
nginx-01 | CHANGED => {"changed": true, "rc": 0, "stderr": "Shared connection to 192.168.1.10 closed.\r\n", "stderr_lines": ["Shared connection to 192.168.1.10 closed."], "stdout": "正在安装依赖关系\r\n\u001b[1;32m依赖包安装成功\u001b[0m\r\n正在创建nginx使用的组和用户\r\n\u001b[1;32m创建用户和组成功\u001b[0m\r\n正在下载软件包\r\n\u001b[1;32m软件包下载成功\u001b[0m\r\n正在解压软件包\r\n\u001b[1;32m软件包解压成功\u001b[0m\r\n正在修改配置文件,隐藏版本号\r\n\u001b[1;32m隐藏版本号成功\u001b[0m\r\n正在预编译\r\n\u001b[1;32m预编译成功\u001b[0m\r\n正在编译及安装\r\n\u001b[1;32m编译及安装成功\u001b[0m\r\n正在添加环境变量\r\n\u001b[1;32m添加环境变量成功\u001b[0m\r\n正在启动nginx\r\n\u001b[1;32m启动成功\u001b[0m\r\n正在生成启动脚本\r\n\u001b[1;32m生成脚本成功\u001b[0m\r\n正在添加权限\r\n\u001b[1;32m添加权限成功\u001b[0m\r\n正在配置开机自启服务\r\n\u001b[1;32m配置开机自启成功\u001b[0m\r\n", "stdout_lines": ["正在安装依赖关系", "\u001b[1;32m依赖包安装成功\u001b[0m", "正在创建nginx使用的组和用户", "\u001b[1;32m创建用户和组成功\u001b[0m", "正在下载软件包", "\u001b[1;32m软件包下载成功\u001b[0m", "正在解压软件包", "\u001b[1;32m软件包解压成功\u001b[0m", "正在修改配置文件,隐藏版本号", "\u001b[1;32m隐藏版本号成功\u001b[0m", "正在预编译", "\u001b[1;32m预编译成功\u001b[0m", "正在编译及安装", "\u001b[1;32m编译及安装成功\u001b[0m", "正在添加环境变量", "\u001b[1;32m添加环境变量成功\u001b[0m", "正在启动nginx", "\u001b[1;32m启动成功\u001b[0m", "正在生成启动脚本", "\u001b[1;32m生成脚本成功\u001b[0m", "正在添加权限", "\u001b[1;32m添加权限成功\u001b[0m", "正在配置开机自启服务", "\u001b[1;32m配置开机自启成功\u001b[0m"]
}
[root@ansible ~]# 
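The apache run is not shown in the original; assuming the same pattern against the web02 group with the uploaded script, it would be:

[root@ansible ~]# ansible -i /etc/ansible/hosts web02 -m script -a "auto_install_apache.sh"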

Mount the ceph filesystem on the web servers

First create the /etc/ceph directory on every web server.

[root@nginx-01 ~]# mkdir -p /etc/ceph

Edit the file

Write the ceph admin key into this file (do it on every web server).

[root@nginx-01 ~]# vim /etc/ceph/admin.secret
AQDgyk1i5gPqMRAAQa0dPgyF71C5Psiq/ebuQQ==
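Rather than typing the key by hand, it can be printed on a ceph node; a sketch (ceph auth get-key is a standard command; run it on whichever node holds the admin keyring):

# prints the client.admin secret that goes into /etc/ceph/admin.secret
ceph auth get-key client.admin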

Install the package

Install this on every web server (it needs a network repo, or copy the ceph package directory and ceph.repo over to the web servers).

[root@nginx-01 ~]# yum -y install ceph-common-12.2.12

Mount

Mount the ceph filesystem onto the directory where the web server keeps its pages.

[root@nginx-01 ~]# mount -t ceph 192.168.1.6:6789:/ /usr/local/nginx/html/ -o name=admin,secretfile=/etc/ceph/admin.secret

Check it

[root@nginx-01 ~]# df -hT
文件系统                类型      容量  已用  可用 已用% 挂载点
/dev/mapper/centos-root xfs        96G  1.5G   95G    2% /
devtmpfs                devtmpfs  476M     0  476M    0% /dev
tmpfs                   tmpfs     488M     0  488M    0% /dev/shm
tmpfs                   tmpfs     488M  7.7M  480M    2% /run
tmpfs                   tmpfs     488M     0  488M    0% /sys/fs/cgroup
/dev/sr0                iso9660   4.2G  4.2G     0  100% /mnt/cdrom
/dev/sda1               xfs       497M  123M  375M   25% /boot
tmpfs                   tmpfs      98M     0   98M    0% /run/user/0
192.168.1.6:6789:/      ceph      141G     0  141G    0% /usr/local/nginx/html
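This mount is lost on reboot. A hedged /etc/fstab sketch for making it persistent (an addition to the article; _netdev delays the mount until the network is up — verify the options against your kernel's ceph client):

# /etc/fstab — cephfs kernel mount for the nginx document root
192.168.1.6:6789:/  /usr/local/nginx/html  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0 0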

Setting up LVS + keepalived

Hostname      IP
m_director    192.168.1.14
s_director    192.168.1.15

The real servers are the nginx and apache services built earlier.
We install keepalived from source.

Install dependencies

Install keepalived with the same steps on both the master and the backup.

[root@m_director ~]# yum -y install gcc openssl-devel pcre-devel libnl-devel

Upload the software package

Link: https://pan.baidu.com/s/12RMBlXfMnpxoOqbRVEDgzQ
Extraction code: dreq

[root@m_director ~]# ls
anaconda-ks.cfg  keepalived-2.0.18.tar.gz

Unpack the package

[root@m_director ~]# tar -zxvf keepalived-2.0.18.tar.gz

Run configure

[root@m_director ~]# cd keepalived-2.0.18
[root@m_director keepalived-2.0.18]# ./configure --prefix=/usr/local/keepalived

Compile and install

[root@m_director keepalived-2.0.18]# make && make install

Configure keepalived in LVS-DR mode

Edit the keepalived.conf configuration file.

Add a symlink

[root@m_director ~]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/

Create the config directory

[root@m_director ~]# mkdir /etc/keepalived

Copy the config file into the directory just created

[root@m_director ~]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

Edit the configuration file

[root@m_director ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id m_director
}

vrrp_instance lvs-dr {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.1.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

An annotated copy, line by line (note this reference sample uses placeholder addresses 172.16.16.x):

! Configuration File for keepalived    # a leading ! marks a comment
global_defs {    # global definitions
notification_email {    # alert mailboxes
acassen@firewall.loc    # addresses that receive alerts; fill in real ones as needed
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc    # sender address
smtp_server 192.168.200.1    # smtp server used to send the alert mail
smtp_connect_timeout 30    # smtp connect timeout; the mail settings above are optional
router_id m_director    # identifier of this keepalived server; must be unique
}

vrrp_instance lvs-dr {    # defines an instance; one cluster is one instance. The default name VI_1 can be changed freely.
state MASTER    # MASTER marks this node as the master; set BACKUP on the backup node. State names are uppercase.
interface ens33    # interface that carries the virtual IP
virtual_router_id 51    # virtual router ID, ideally a number, unique within one keepalived.conf; MASTER and BACKUP must use the same value for the same instance.
priority 100    # node priority (1-255), higher wins; the backup must be lower than the master.
advert_int 1    # advertisement interval: seconds between MASTER/BACKUP heartbeats, default 1.
authentication {    # authentication: must be identical on both nodes, or MASTER and BACKUP of the same instance cannot talk.
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {    # the virtual IP; must be identical on both nodes
172.16.16.172
}
}
# Everything above implements high availability; if HA is all you need, the rest can be deleted.
# Below is the virtual server definition,
# roughly equivalent to: ipvsadm -A -t 172.16.16.172:80 -s rr
virtual_server 172.16.16.172 80 {    # virtual server: virtual IP and port
delay_loop 6    # health-check interval: how often keepalived probes the real servers, in seconds
lb_algo rr    # scheduling algorithm: round robin
lb_kind DR    # forwarding mode: NAT, TUN or DR
nat_mask 255.255.255.0    # comment this line out (with !) when not in NAT mode
persistence_timeout 50    # session persistence in seconds: requests from one client IP keep hitting the same real server for 50 s, overriding rr scheduling until the timeout expires
protocol TCP    # forwarding protocol: TCP or UDP
real_server 172.16.16.177 80 {    # real server 1: real IP and port, separated by a space
weight 1    # weight: the larger the number, the higher the priority
TCP_CHECK {    # health check: add this block by hand, deleting the stock contents
connect_timeout 3    # 3 seconds with no response counts as a failure
nb_get_retry 3    # number of retries
delay_before_retry 3    # delay between retries
connect_port 80    # port to probe
}
}
real_server 172.16.16.178 80 {    # real server 2
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
# The stock file ships two more virtual_server templates; delete them, e.g.:
# virtual_server 10.10.10.2 1358 { ... }
# virtual_server 10.10.10.3 1358 { ... }

Restart keepalived and enable it at boot

[root@m_director ~]# systemctl restart keepalived
[root@m_director ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Check it

[root@m_director ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2022-04-13 19:25:11 CST; 1min 4s ago
 Main PID: 17708 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─17708 /usr/local/keepalived/sbin/keepalived -D
           ├─17709 /usr/local/keepalived/sbin/keepalived -D
           └─17710 /usr/local/keepalived/sbin/keepalived -D

4月 13 19:25:14 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:14 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:14 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:14 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: (lvs-dr) Sending/queueing gratuitous ARPs on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100
4月 13 19:25:19 m_director Keepalived_vrrp[17710]: Sending gratuitous ARP on ens33 for 192.168.1.100

Backup node (s_director) configuration

Create the directory first, pull the config file just written on the master, then adjust it.

[root@s_director ~]# mkdir /etc/keepalived
[root@s_director ~]# scp root@192.168.1.14:/etc/keepalived/keepalived.conf /etc/keepalived/
root@192.168.1.14's password: 
keepalived.conf                                                                               100%  846   835.3KB/s   00:00    
[root@s_director ~]# 

Edit the configuration file (only router_id, state and priority differ from the master)

[root@s_director ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    router_id s_director
}

vrrp_instance lvs-dr {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}

virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    #persistence_timeout 50
    protocol TCP
    real_server 192.168.1.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Restart and enable at boot

[root@s_director ~]# systemctl restart keepalived
[root@s_director ~]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Check it

[root@s_director ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2022-04-13 19:31:40 CST; 55s ago
 Main PID: 7270 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─7270 /usr/local/keepalived/sbin/keepalived -D
           └─7271 /usr/local/keepalived/sbin/keepalived -D

4月 13 19:31:40 s_director Keepalived_vrrp[7271]: Assigned address 192.168.1.15 for interface ens33
4月 13 19:31:40 s_director Keepalived_vrrp[7271]: Assigned address fe80::d59:d76b:bd8:5cf3 for interface ens33
4月 13 19:31:40 s_director Keepalived_vrrp[7271]: Registering gratuitous ARP shared channel
4月 13 19:31:40 s_director Keepalived_vrrp[7271]: (lvs-dr) removing VIPs.
4月 13 19:31:40 s_director Keepalived_vrrp[7271]: (lvs-dr) Entering BACKUP STATE (init)
4月 13 19:31:40 s_director Keepalived_vrrp[7271]: VRRP sockpool: [ifindex(2), family(IPv4), proto(112), unicast(0), fd(11,12)]
Hint: Some lines were ellipsized, use -l to show in full.

Test master/backup failover

First check the VIP on the master and the backup.

[root@m_director ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:4d:1b:8f brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.14/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::ce35:109e:58ef:9ad4/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@s_director ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:56:bd:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.15/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d59:d76b:bd8:5cf3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::ce35:109e:58ef:9ad4/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever

Stop the master first, then check the VIP on the backup.

[root@m_director ~]# systemctl stop keepalived
[root@s_director ~]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:56:bd:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.15/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.100/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d59:d76b:bd8:5cf3/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::ce35:109e:58ef:9ad4/64 scope link tentative noprefixroute dadfailed 
       valid_lft forever preferred_lft forever

The VIP has floated to the backup; failover works.

Adjust the nginx real servers (repeat the same steps on both nginx-01 and nginx-02)

Suppress ARP replies for the VIP (arp_ignore/arp_announce), so the real servers do not answer ARP for 192.168.1.100:

[root@nginx-01 ~]# vim /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.conf.ens33.arp_ignore = 1
net.ipv4.conf.ens33.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

Apply the settings

[root@nginx-01 ~]# sysctl -p
net.ipv4.conf.ens33.arp_ignore = 1
net.ipv4.conf.ens33.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

Configure the VIP on the nginx hosts

[root@nginx-01 ~]# cp /etc/sysconfig/network-scripts/ifcfg-lo /etc/sysconfig/network-scripts/ifcfg-lo:0
[root@nginx-01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.1.100
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback
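Some LVS-DR guides also pin a host route for the VIP to lo:0 so the real server answers VIP traffic locally; a sketch (not in the original article, and harmless to skip if the ifcfg file alone works for you — add it to rc.local if it should persist):

[root@nginx-01 ~]# route add -host 192.168.1.100 dev lo:0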

Restart the network and check the VIP

[root@nginx-01 ~]# systemctl restart network
[root@nginx-01 ~]# ifconfig 
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.10  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::d59:d76b:bd8:5cf3  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::3071:e68b:a65d:2d95  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::ce35:109e:58ef:9ad4  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:28:c6:ce  txqueuelen 1000  (Ethernet)
        RX packets 1173  bytes 91510 (89.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 674  bytes 80567 (78.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2  bytes 104 (104.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 104 (104.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.1.100  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)

Modify the home page (for testing)

Do this on both nginx hosts, each echoing its own IP (192.168.1.10 on nginx-01, 192.168.1.11 on nginx-02).

[root@nginx-01 ~]# cd /usr/local/nginx/html/
[root@nginx-01 html]# echo "192.168.1.10" > index.html

Install the ipvsadm command and add the rules

[root@m_director ~]# yum -y install ipvsadm
[root@m_director ~]# ipvsadm -A -t 192.168.1.100:80 -s rr

Add the real-server nodes

[root@m_director ~]# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10:80 -g -w 1
[root@m_director ~]# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g -w 1

Restart the service

[root@m_director ~]# systemctl restart keepalived

Check it

[root@m_director ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.100:80 rr
  -> 192.168.1.10:80              Route   1      0          0
  -> 192.168.1.11:80              Route   1      0          0

Access test

[root@apache ~]# curl 192.168.1.100
192.168.1.11
[root@apache ~]# curl 192.168.1.100
192.168.1.10

Requests alternate between the two real servers, so round-robin works.
(LVS is now fully set up.)

Setting up the Discuz forum

We build the Discuz forum on nginx-01.

上传软件包

Link: https://pan.baidu.com/s/11qYfZBMNok9g7LFvO4jLdA
Extraction code: 4ev7

[root@nginx-01 ~]# ls
anaconda-ks.cfg  ceph  libmcrypt-2.5.7.tar.gz  nginx-1.10.3.tar.gz  php-5.6.36.tar.gz
[root@nginx-01 ~]# 

Upload the two packages php-5.6.36.tar.gz and libmcrypt-2.5.7.tar.gz.

Resolve dependencies

If this step fails, configure a network yum repository first.

[root@nginx-01 ~]# yum -y install gcc autoconf  freetype gd libpng libpng-devel libjpeg libxml2 libxml2-devel zlib curl curl-devel freetype-devel libjpeg-devel bzip2 bzip2-devel openssl openssl-devel

Install libmcrypt

[root@nginx-01 ~]# tar -zxvf libmcrypt-2.5.7.tar.gz
[root@nginx-01 ~]# cd libmcrypt-2.5.7
[root@nginx-01 libmcrypt-2.5.7]# ./configure --prefix=/usr/local/libmcrypt && make && make install

Unpack the php package

[root@nginx-01 ~]# tar -zxvf php-5.6.36.tar.gz -C /usr/local/src/

Install php

Run configure first.

[root@nginx-01 ~]# cd /usr/local/src/php-5.6.36/
[root@nginx-01 php-5.6.36]# ./configure --prefix=/usr/local/php5.6 --with-mysql=mysqlnd --with-pdo-mysql=mysqlnd --with-mysqli=mysqlnd --with-openssl --enable-fpm --enable-sockets --enable-sysvshm --enable-mbstring --with-freetype-dir --with-jpeg-dir --with-png-dir --with-zlib --with-libxml-dir=/usr --enable-xml --with-mhash --with-mcrypt=/usr/local/libmcrypt --with-config-file-path=/etc --with-config-file-scan-dir=/usr/local/php5.6/etc/ --with-bz2 --enable-maintainer-zts

Compile and install

[root@nginx-01 php-5.6.36]# make && make install

Generate php.ini

[root@nginx-01 php-5.6.36]# cp /usr/local/src/php-5.6.36/php.ini-production  /usr/local/php5.6/etc/php.ini

Rename the fpm config file php-fpm.conf.default

[root@nginx-01 php-5.6.36]# cd /usr/local/php5.6/etc/
[root@nginx-01 etc]# cp php-fpm.conf.default php-fpm.conf

Edit the configuration file

Change the following settings.

[root@nginx-01 etc]# vim php-fpm.conf
user = www
group = www
pid = run/php-fpm.pid
listen = 0.0.0.0:9000
pm.max_children = 300
pm.start_servers = 20
pm.min_spare_servers = 20
pm.max_spare_servers = 100

Copy the startup script into init.d

[root@nginx-01 etc]# cp /usr/local/src/php-5.6.36/sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm

Make it executable

[root@nginx-01 etc]# chmod +x /etc/init.d/php-fpm 

Enable it at boot

[root@nginx-01 etc]# chkconfig --add php-fpm
[root@nginx-01 etc]# chkconfig php-fpm on

Start the service

[root@nginx-01 etc]# /etc/init.d/php-fpm  start
Starting php-fpm  done

Check the listening port

[root@nginx-01 etc]# ss -anput | grep php
tcp    LISTEN     0      128       *:9000                  *:*                   users:(("php-fpm",pid=5261,fd=0),("php-fpm",pid=5260,fd=0),("php-fpm",pid=5259,fd=0),("php-fpm",pid=5258,fd=0),("php-fpm",pid=5257,fd=0),("php-fpm",pid=5256,fd=0),("php-fpm",pid=5255,fd=0),("php-fpm",pid=5254,fd=0),("php-fpm",pid=5253,fd=0),("php-fpm",pid=5252,fd=0),("php-fpm",pid=5251,fd=0),("php-fpm",pid=5250,fd=0),("php-fpm",pid=5249,fd=0),("php-fpm",pid=5248,fd=0),("php-fpm",pid=5247,fd=0),("php-fpm",pid=5246,fd=0),("php-fpm",pid=5245,fd=0),("php-fpm",pid=5244,fd=0),("php-fpm",pid=5243,fd=0),("php-fpm",pid=5242,fd=0),("php-fpm",pid=5241,fd=7))

Edit the nginx.conf configuration file

[root@nginx-01 ~]# vim /usr/local/nginx/conf/nginx.conf
user  www www;
worker_processes  1;
error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  65535;
    multi_accept on;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    tcp_nopush     on;
    keepalive_timeout  65;
    tcp_nodelay on;
    client_header_buffer_size 4k;
    open_file_cache max=102400 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    fastcgi_connect_timeout 600;
    fastcgi_send_timeout 600;
    fastcgi_read_timeout 600;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    fastcgi_temp_path /usr/local/nginx/nginx_tmp;
    fastcgi_intercept_errors on;
    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=ngx_fcgi_cache:128m inactive=1d max_size=10g;
    gzip on;
    gzip_min_length  2k;
    gzip_buffers     4 32k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_vary on;
    gzip_proxied any;

    server {
        listen       80;
        server_name  www.benet.com;
        charset utf-8;
        access_log  logs/host.access.log  main;
        location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
            valid_referers none blocked www.benet.com benet.com;
            if ($invalid_referer) {
                #return 302 http://www.benet.com/img/nolink.jpg;
                return 404;
                break;
            }
            expires 365d;
            access_log off;
        }
        location / {
            root   html;
            index  index.php index.html index.htm;
        }
        location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
            expires 30d;
            #log_not_found off;
            access_log off;
        }
        location ~* \.(js|css)$ {
            expires 7d;
            log_not_found off;
            access_log off;
        }
        location = /(favicon.ico|robots.txt) {
            access_log off;
            log_not_found off;
        }
        location /status {
            stub_status on;
        }
        location ~ .*\.(php|php5)?$ {
            root html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_cache ngx_fcgi_cache;
            fastcgi_cache_valid 200 302 1h;
            fastcgi_cache_valid 301 1d;
            fastcgi_cache_valid any 1m;
            fastcgi_cache_min_uses 1;
            fastcgi_cache_use_stale error timeout invalid_header http_500;
            fastcgi_cache_key http://$host$request_uri;
        }
        #error_page  404              /404.html;
        # redirect server error pages to the static page /50x.html
        #error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Reload the configuration

[root@nginx-01 ~]# nginx -s reload

Create index.php and test.php

[root@nginx-01 ~]# vim /usr/local/nginx/html/index.php
<?php
phpinfo();
?>
[root@nginx-01 ~]# vim /usr/local/nginx/html/test.php
<?php
$link=mysql_connect('192.168.1.200','manager','1');
if ($link) echo "connection success......";
mysql_close();
?>

The MySQL address used by the test page is the virtual IP (192.168.1.200) configured earlier for the MHA cluster; it can also be checked from the shell, as sketched below.
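A quick sketch of that shell check (assumes the mysql client is installed on nginx-01 and the manager grant from the MHA section covers the 192.168.1.% subnet):

[root@nginx-01 ~]# mysql -umanager -p1 -h 192.168.1.200 -e "select version();"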

Test

(screenshot: the phpinfo page renders) — PHP is being served.
(screenshot: test.php prints "connection success......") — the database connection works.

Change the default runtime account

[root@nginx-01 ~]# vim /usr/local/php5.6/etc/php-fpm.conf
user = nginx
group = nginx

Download the package

[root@nginx-01 ~]# mkdir /usr/local/software
[root@nginx-01 ~]# cd !$
cd /usr/local/software
[root@nginx-01 software]# wget http://download.comsenz.com/DiscuzX/3.3/Discuz_X3.3_SC_UTF8.zip
--2022-04-19 13:38:15--  http://download.comsenz.com/DiscuzX/3.3/Discuz_X3.3_SC_UTF8.zip
正在解析主机 download.comsenz.com (download.comsenz.com)... 220.194.79.18, 111.166.22.241, 218.11.11.205, ...
正在连接 download.comsenz.com (download.comsenz.com)|220.194.79.18|:80... 已连接。
已发出 HTTP 请求,正在等待回应... 301 Moved Permanently
位置:https://download.comsenz.com/DiscuzX/3.3/Discuz_X3.3_SC_UTF8.zip [跟随至新的 URL]
--2022-04-19 13:38:16--  https://download.comsenz.com/DiscuzX/3.3/Discuz_X3.3_SC_UTF8.zip
正在连接 download.comsenz.com (download.comsenz.com)|220.194.79.18|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:10922155 (10M) [application/zip]
正在保存至: “Discuz_X3.3_SC_UTF8.zip”

100%[======================================================================================>] 10,922,155   678KB/s 用时 17s    

2022-04-19 13:38:33 (646 KB/s) - 已保存 “Discuz_X3.3_SC_UTF8.zip” [10922155/10922155])

Create the site directory

[root@nginx-01 software]# mkdir -p /usr/local/nginx/html/bbs

Unpack the package

[root@nginx-01 software]# unzip Discuz_X3.3_SC_UTF8.zip -d /usr/local/nginx/html/bbs/

Create a virtual host

[root@nginx-01 software]# cd /usr/local/nginx/
[root@nginx-01 nginx]# mkdir -p conf/vhost
[root@nginx-01 nginx]# vim conf/vhost/bbs.discuz.com.conf
log_format  bbs  '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$http_x_forwarded_for"';
server {
    listen       80;
    #autoindex on;
    server_name  bbs.discuz.com;
    access_log  logs/bbs.access.log  bbs;
    location / {
        root   html/bbs/upload;
        index  index.php index.html;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    location ~ \.php$ {
        root           html/bbs/upload;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
}
[root@nginx-01 nginx]# vim conf/nginx.conf
# add inside the http block:
http {
    include       mime.types;
    default_type  application/octet-stream;
    include       vhost/bbs.discuz.com.conf;   # add this line

Set ownership

[root@nginx-01 nginx]# chown -R nginx. /usr/local/nginx/html/bbs/

Check and reload nginx

[root@nginx-01 nginx]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@nginx-01 nginx]# nginx -s reload

Create the database

[root@mysql-01 ~]# mysql -uroot -p1
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.7.26-log Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database discuz charset utf8;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on discuz.* to discuz@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

Browse to the site and run the installer

(installation wizard screenshots)

The Discuz forum is now up.

Installing zabbix (built on nginx-02)

First upload the software packages.

[root@nginx-02 ~]# ls
anaconda-ks.cfg  ceph  libmcrypt-2.5.7.tar.gz  nginx-1.10.3.tar.gz  php-5.6.36.tar.gz  zabbixDependence.tar.gz  zabbix-4.2.6.tar.gz

Upload these four packages: php-5.6.36.tar.gz, zabbixDependence.tar.gz, libmcrypt-2.5.7.tar.gz and zabbix-4.2.6.tar.gz.

Unpack the packages and configure the zabbix repo

[root@nginx-02 ~]# tar -zxvf php-5.6.36.tar.gz
[root@nginx-02 ~]# tar -zxvf zabbixDependence.tar.gz
[root@nginx-02 ~]# tar -zxvf libmcrypt-2.5.7.tar.gz
[root@nginx-02 ~]# tar -zxvf zabbix-4.2.6.tar.gz
[root@nginx-02 ~]# cp /etc/yum.repos.d/centos7.repo /etc/yum.repos.d/zabbix.repo
[root@nginx-02 ~]# vim /etc/yum.repos.d/zabbix.repo [zabbix]
name=zabbix
baseurl=file:///root/zabbixDependence
enabled=1
gpgcheck=0

Resolve dependencies

[root@nginx-02 ~]# yum -y install make apr* autoconf automake curl-devel gcc gcc-c++  openssl openssl-devel gd kernel keyutils patch perl kernel-headers compat* mpfr cpp glibc libgomp libstdc++-devel keyutils-libs-devel libcom_err-devel libsepol-devel libselinux-devel krb5-devel zlib-devel libXpm* freetype libjpeg* libpng*  libtool* libxml2 libxml2-devel patch libcurl-devel bzip2-devel freetype-devel

Install libmcrypt

[root@nginx-02 ~]# cd libmcrypt-2.5.7
[root@nginx-02 libmcrypt-2.5.7]# ./configure --prefix=/usr/local/libmcrypt && make && make install

Install php

Run configure first.

[root@nginx-02 libmcrypt-2.5.7]# cd /root/php-5.6.36
[root@nginx-02 php-5.6.36]# ./configure --prefix=/usr/local/php5.6 --with-config-file-path=/etc  --with-mysql=mysqlnd --with-mysqli=mysqlnd --with-mysql-sock=mysqlnd --with-gd --with-iconv --with-libxml-dir=/usr --with-mhash --with-mcrypt --with-config-file-scan-dir=/etc/php.d --with-bz2 --with-zlib --with-freetype-dir --with-png-dir --with-jpeg-dir --enable-xml --enable-bcmath --enable-shmop --enable-sysvsem --enable-inline-optimization --enable-mbregex --enable-fpm --enable-mbstring --enable-ftp --enable-gd-native-ttf --with-openssl --enable-pcntl --enable-sockets --with-xmlrpc --enable-zip --enable-soap --without-pear --with-gettext --enable-session --with-mcrypt=/usr/local/libmcrypt --with-curl

Compile and install

[root@nginx-02 php-5.6.36]# make && make install

Edit the configuration file

[root@nginx-02 php-5.6.36]# cp php.ini-production /etc/php.ini
[root@nginx-02 php-5.6.36]# vim /etc/php.ini
Find   ;date.timezone =           change to   date.timezone = PRC          # set the time zone
Find   expose_php = On            change to   expose_php = Off             # hide the PHP version
Find   short_open_tag = Off       change to   short_open_tag = On          # allow PHP short tags
Find   post_max_size = 8M         change to   post_max_size = 16M          # max upload size
Find   max_execution_time = 30    change to   max_execution_time = 300     # max script run time
Find   max_input_time = 60        change to   max_input_time = 300         # limit, in seconds, on receiving data via POST, GET and PUT
Also set:
always_populate_raw_post_data = -1
mbstring.func_overload = 0

Create the php-fpm service startup script

[root@nginx-02 php-5.6.36]# cp sapi/fpm/init.d.php-fpm /etc/init.d/php-fpm
[root@nginx-02 php-5.6.36]# chmod +x /etc/init.d/php-fpm 
[root@nginx-02 php-5.6.36]# chkconfig --add php-fpm
[root@nginx-02 php-5.6.36]# chkconfig php-fpm on

Edit the configuration file

[root@nginx-02 php-5.6.36]# cp /usr/local/php5.6/etc/php-fpm.conf.default /usr/local/php5.6/etc/php-fpm.conf
[root@nginx-02 php-5.6.36]# vim /usr/local/php5.6/etc/php-fpm.conf
# change the following:
pid = run/php-fpm.pid
user = www
group = www
listen = 127.0.0.1:9000
pm.max_children = 300
pm.start_servers = 10
pm.min_spare_servers = 10
pm.max_spare_servers = 50

Start the php-fpm service

[root@nginx-02 php-5.6.36]# /etc/init.d/php-fpm start
Starting php-fpm  done
[root@nginx-02 php-5.6.36]# netstat -anput | grep php
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      81436/php-fpm: mast

Edit the nginx config to support php

[root@nginx-02 ~]# vim /usr/local/nginx/conf/nginx.conf
user  www www;
worker_processes  1;
error_log  logs/error.log;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;
        charset utf-8;
        location / {
            root   html;
            index  index.php index.html index.htm;
        }
        location ~ \.php$ {
            root  html;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Reload the configuration

[root@nginx-02 ~]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@nginx-02 ~]# nginx -s reload

Create test pages

[root@nginx-02 ~]# vim /usr/local/nginx/html/test1.php
<?php
phpinfo();
?>
[root@nginx-02 ~]# vim /usr/local/nginx/html/test2.php
<?php
$link=mysql_connect('192.168.1.200','manager','1');
if($link) echo "ok";
mysql_close();
?>

Test

(screenshots: test1.php shows the phpinfo page; test2.php prints "ok")

Create the database zabbix will use

[root@mysql-01 ~]# mysql -uroot -p1
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 5.7.26-log Source distribution

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create database zabbix character set utf8;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on zabbix.* to zabbix@'192.168.1.%' identified by '1';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)

Import the database schema

Mind the order below; importing in the wrong order throws errors.

[root@nginx-02 zabbix-4.2.6]# mysql -uzabbix -p1 -h 192.168.1.200 zabbix < database/mysql/schema.sql
[root@nginx-02 zabbix-4.2.6]# mysql -uzabbix -p1 -h 192.168.1.200 zabbix < database/mysql/images.sql 
[root@nginx-02 zabbix-4.2.6]# mysql -uzabbix -p1 -h 192.168.1.200 zabbix < database/mysql/data.sql

Resolve dependencies

[root@nginx-02 zabbix-4.2.6]# yum -y install net-snmp net-snmp-devel curl-devel java-1.8.0-openjdk java-1.8.0-openjdk-devel  OpenIPMI-devel  libssh2-devel libevent libevent-devel mariadb-devel

Create the zabbix user

[root@nginx-02 ~]# groupadd zabbix
[root@nginx-02 ~]# useradd -s /sbin/nologin -g zabbix zabbix

Run configure

[root@nginx-02 ~]# cd zabbix-4.2.6
[root@nginx-02 zabbix-4.2.6]#  ./configure --prefix=/usr/local/zabbix --enable-server --enable-agent --enable-java --with-mysql=mysqlnd --with-net-snmp --with-libcurl --with-libxml2 --with-openipmi

Install

With configure done, install directly.

[root@nginx-02 zabbix-4.2.6]# make install

Add symlinks

[root@nginx-02 zabbix-4.2.6]# ln -s /usr/local/zabbix/bin/* /usr/local/bin/

Configure zabbix_server.conf

[root@nginx-02 ~]# vim /usr/local/zabbix/etc/zabbix_server.conf
# change the following:
LogFile=/usr/local/zabbix/logs/zabbix_server.log
PidFile=/usr/local/zabbix/logs/zabbix_server.pid
DBHost=192.168.1.200
DBName=zabbix
DBUser=zabbix
DBPassword=1
DBPort=3306
[root@nginx-02 ~]# mkdir -p /usr/local/zabbix/logs
[root@nginx-02 ~]# chown -R zabbix: /usr/local/zabbix/

Configure zabbix to monitor itself

[root@nginx-02 ~]# vim /usr/local/zabbix/etc/zabbix_agentd.conf
# change the following:
PidFile=/usr/local/zabbix/logs/zabbix_agentd.pid
LogFile=/usr/local/zabbix/logs/zabbix_agentd.log
Server=127.0.0.1
ListenPort=10050
ServerActive=127.0.0.1
Hostname=nginx-02    # must exactly match the name of the host being monitored
Timeout=15
Include=/usr/local/zabbix/etc/zabbix_agentd.conf.d/
UnsafeUserParameters=1

Start it

[root@nginx-02 ~]# /usr/local/zabbix/sbin/zabbix_server -c /usr/local/zabbix/etc/zabbix_server.conf
[root@nginx-02 ~]# netstat -anput | grep zabbix
tcp        0      0 0.0.0.0:10051           0.0.0.0:*               LISTEN      102359/zabbix_serve

Add the zabbix startup scripts

[root@nginx-02 ~]# cd zabbix-4.2.6/misc/init.d/
[root@nginx-02 init.d]# cp fedora/core/* /etc/init.d/
[root@nginx-02 init.d]# vim /etc/init.d/zabbix_server   # edit both files; the changes are the same in each
[root@nginx-02 init.d]# vim /etc/init.d/zabbix_agentd
BASEDIR=/usr/local/zabbix                           # find this line: the zabbix install directory
PIDFILE=/usr/local/zabbix/logs/$BINARY_NAME.pid     # pid file path
[root@nginx-02 init.d]# chkconfig --add zabbix_server
[root@nginx-02 init.d]# chkconfig --add zabbix_agentd
[root@nginx-02 init.d]# chkconfig zabbix_server on
[root@nginx-02 init.d]# chkconfig zabbix_agentd on

Configure the zabbix web UI

Note: /usr/local/nginx/html is the nginx default site directory and www is the nginx runtime account.
Note: PHP needs at least these extensions enabled: gd, bcmath, ctype, libXML, xmlreader, xmlwriter, session, sockets, mbstring, gettext, mysql.

[root@nginx-02 init.d]# cd /root/zabbix-4.2.6
[root@nginx-02 zabbix-4.2.6]# cp -r frontends/php/* /usr/local/nginx/html/
[root@nginx-02 zabbix-4.2.6]# chown -R www: /usr/local/nginx/html/
[root@nginx-02 zabbix-4.2.6]# /usr/local/php5.6/bin/php -m
[PHP Modules]
bcmath
bz2
Core
ctype
curl
date
dom
ereg
fileinfo
filter
ftp
gd
gettext
hash
iconv
json
libxml
mbstring
mcrypt
mhash
mysql
mysqli
mysqlnd
openssl
pcntl
pcre
PDO
pdo_sqlite
Phar
posix
Reflection
session
shmop
SimpleXML
soap
sockets
SPL
sqlite3
standard
sysvsem
tokenizer
xml
xmlreader
xmlrpc
xmlwriter
zip
zlib

[Zend Modules]

Start zabbix_agentd

[root@nginx-02 zabbix-4.2.6]# systemctl start zabbix_agentd
[root@nginx-02 zabbix-4.2.6]# netstat -anput | grep zabbix
tcp        0      0 0.0.0.0:10050           0.0.0.0:*               LISTEN      104081/zabbix_agent

Configure the web front end

(zabbix web setup wizard screenshots)

Switch the interface to Chinese

(screenshots of the language setting)

Fix garbled Chinese characters

From the Windows Control Panel -> Fonts, pick a Chinese typeface, e.g. KaiTi (simkai.ttf).

Copy it into the fonts directory on the web host, /usr/local/nginx/html/assets/fonts/, making sure the extension is ttf, and move the old DejaVuSans.ttf font out of the way.

[root@nginx-02 ~]# mv simkai.ttf /usr/local/nginx/html/assets/fonts/
[root@nginx-02 ~]# ls /usr/local/nginx/html/assets/fonts/
DejaVuSans.ttf  simkai.ttf
[root@nginx-02 ~]# mv /usr/local/nginx/html/assets/fonts/DejaVuSans.ttf .
[root@nginx-02 ~]# vim /usr/local/nginx/html/include/defines.inc.php
# change every font reference in the file from DejaVuSans to simkai
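The same replacement can be done with one sed call instead of editing in vim; a sketch (-i.bak keeps a backup copy; both font names are exactly those used above):

[root@nginx-02 ~]# sed -i.bak 's/DejaVuSans/simkai/g' /usr/local/nginx/html/include/defines.inc.php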

The zabbix host is now fully configured.

Setting up the DNS service

bind         // the main DNS server package
bind-chroot  // improves security

[root@dns ~]#  yum -y install bind bind-chroot bind-utils

Start named and enable it at boot

[root@dns ~]# systemctl start named
[root@dns ~]# systemctl enable named
Created symlink from /etc/systemd/system/multi-user.target.wants/named.service to /usr/lib/systemd/system/named.service.

Check the ports

[root@dns ~]# netstat -anput | grep 53
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      1261/named          
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      1261/named          
tcp6       0      0 ::1:53                  :::*                    LISTEN      1261/named          
tcp6       0      0 ::1:953                 :::*                    LISTEN      1261/named          
udp        0      0 127.0.0.1:53            0.0.0.0:*                           1261/named          
udp6       0      0 ::1:53                  :::*                                1261/named

Edit the configuration file

Back up the config file first.

[root@dns ~]# cp /etc/named.conf /etc/named.conf.back
[root@dns ~]# vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
// See the BIND Administrator's Reference Manual (ARM) for details about the
// configuration located in /usr/share/doc/bind-{version}/Bv9ARM.html

options {
    listen-on port 53 { any; };
    listen-on-v6 port 53 { any; };
    directory       "/var/named";
    allow-query     { any; };
    recursion       yes;
    forwarders      { 8.8.8.8; 114.114.114.114; };
};

zone "." IN {
    type hint;
    file "named.ca";
};

zone "test" IN {
    type master;
    file "test.zone";
    allow-transfer { 192.168.1.17; };
};

zone "1.168.192.in-addr.arpa" IN {
    type master;
    file "192.168.1.arpa";
    allow-transfer { 192.168.1.17; };
};

Check it

Check for syntax errors.

[root@dns ~]# named-checkconf /etc/named.conf

Edit the forward and reverse zone files

[root@dns ~]# cp /var/named/named.empty /var/named/test.zone
[root@dns ~]# vim /var/named/test.zone
$TTL 1D
@       IN SOA @ root.test. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
test.   IN      NS      dns-server01.test.
test.   IN      NS      dns-server02.test.

dns-server01.test.      IN      A       192.168.1.16
dns-server02.test.      IN      A       192.168.1.17
web.nginx01.test.       IN      A       192.168.1.10
discuz.test.            IN      A       192.168.1.10
web.nginx02.test.       IN      A       192.168.1.11
zabbix.test.            IN      A       192.168.1.11
*                       IN      A       192.168.1.100
[root@dns ~]# vim /var/named/192.168.1.arpa
$TTL 1D
@       IN SOA @ root.test. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN      NS      dns-server01.test.
        IN      NS      dns-server02.test.
16              IN      PTR     dns-server01.test.
17              IN      PTR     dns-server02.test.
10              IN      PTR     web.nginx01.test.
10              IN      PTR     discuz.test.
11              IN      PTR     web.nginx02.test.
11              IN      PTR     zabbix.test.

Check the forward and reverse zone files

[root@dns ~]# named-checkzone test /var/named/test.zone 
zone test/IN: loaded serial 0
OK
[root@dns ~]# named-checkzone 1.168.192.in-addr.arpa /var/named/192.168.1.arpa 
zone 1.168.192.in-addr.arpa/IN: loaded serial 0
OK
[root@dns ~]# named-checkconf -z /etc/named.conf
zone test/IN: loaded serial 0
zone 1.168.192.in-addr.arpa/IN: loaded serial 0

Fix the group ownership

[root@dns ~]# chown root:named /var/named/test.zone 
[root@dns ~]# chown root:named /var/named/192.168.1.arpa 

Test

Before testing, point the client's DNS resolver at the DNS server we just built, as sketched below.
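A minimal sketch of that change on the test client (the edit may be overwritten by NetworkManager unless also set as DNS1 in the interface's ifcfg file):

[root@lixiaochen16 ~]# echo "nameserver 192.168.1.16" > /etc/resolv.conf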

[root@lixiaochen16 ~]# nslookup web.nginx01.test
Server:		192.168.1.16
Address:	192.168.1.16#53

Name:	web.nginx01.test
Address: 192.168.1.10
[root@lixiaochen16 ~]# 

Resolution succeeds.

