Table of Contents
1. Docker Introduction
1.1 Overview
1.2 Docker Use Cases
1.3 Advantages of Docker
1.4 Docker Architecture
Three basic Docker concepts:
Docker vs. virtual machines:
2. Installing Docker
2.1 Installing Docker on CentOS
2.2 Uninstalling Older Versions
2.3 Setting Up the Repository
2.4 Installing Docker
2.5 Starting Docker
2.6 Uninstalling Docker
2.7 Docker Registry Mirrors
3. Common Docker Commands
3.1 Help and Startup Commands
3.2 Image Commands
docker images: list local images
docker pull: pull or update a specified image from a registry
docker rmi: remove one or more local images
docker search: search for images on Docker Hub
docker system df: show disk usage of images/containers/volumes
3.3 Container Commands
3.3.1 docker run: create a new container and run a command
3.3.2 docker ps: list running containers
3.3.3 Exiting a container
3.3.4 Starting a stopped container
3.3.5 Restarting a container
3.3.6 Stopping a container
3.3.7 Force-stopping a container
3.3.8 Removing a stopped container
3.3.9 Removing multiple containers at once
3.3.10 Important: daemon containers
3.3.11 Viewing container logs
3.3.12 Viewing processes running inside a container
3.3.13 Inspecting container details
3.3.14 Entering a running container with an interactive shell
3.3.15 Copying files between a container and the host
3.3.16 Exporting and importing containers
4. Docker Images
4.1 What Is an Image?
4.2 Layered Images
4.3 docker commit
4.4 Summary
4.5 Pushing Images to a Remote Registry
4.5.1 Publishing a local image to Alibaba Cloud
4.5.2 Creating a private registry
5. Docker Container Volumes
5.1 Purpose
5.2 Characteristics
5.3 Examples
5.3.1 Mapping a volume between host and container
5.3.2 Sharing data between container and host
5.3.3 Read/write rules for volume mappings
5.3.4 Volume inheritance and sharing
6. Common Installations with Docker
6.1 Installing Tomcat
6.1.1 Pulling the official image
6.1.2 Running a container from the tomcat image
6.2 Installing MySQL
1. Check available MySQL versions
2. Pull the MySQL image
3. Check local images
4. Run the container — simple version
5. Verify the installation
6. Run the container — production-style version
6.3 Installing Redis
6.3.1 Pulling the redis image tagged 6.0.8 from Docker Hub
6.3.2 Starting Redis the simple way
6.3.3 Real-world setup: directory mounts (config file and persistent data)
6.4 Installing Nginx
6.4.1 Pulling the nginx image from Docker Hub
6.4.2 Running the container
6.4.3 Verifying the installation
6.4.4 Verifying config changes on the host
7. Advanced Installations with Docker
7.1 Installing MySQL Master-Slave Replication
7.1.1 How master-slave replication works
7.1.2 Setting up master-slave replication
7.2 Installing a Redis Cluster
7.2.1 What is a Redis cluster?
7.2.2 Redis cluster data distribution: the hash slot algorithm
7.2.3 Configuring a 3-master/3-replica Redis cluster
7.2.4 Scaling out: adding a master/replica pair
7.2.5 Scaling in: removing a master/replica pair
8. Dockerfile Explained
8.1 What Is a Dockerfile?
8.2 How a Dockerfile Build Works
8.3 Common Dockerfile Instructions
8.4 Dockerfile Example
9. Docker and Microservices
9.1 Creating a Plain Microservice Module in IDEA
9.2 Publishing the Microservice to a Docker Container via a Dockerfile
10. Docker Networking
10.1 Introduction to Docker Networking
10.2 Common Commands
10.3 Network Modes
10.3.1 Overview
10.3.2 How container IPs are assigned by default
10.4 Mode Walkthroughs
10.4.1 bridge
10.4.2 host
10.4.3 none
10.4.4 container
10.5 Custom Networks
10.5.1 Without a custom network
10.5.2 With a custom network
10.6 Docker Platform Architecture
11. Container Orchestration with Docker Compose
11.1 What It Is
11.2 What It Can Do
11.3 Download and Installation
11.4 Core Compose Concepts
11.5 The Three Steps of Using Compose
11.6 Common Compose Commands
11.7 Orchestrating Microservices with Compose
11.7.1 Creating the database table
11.7.2 The project
11.7.3 Writing the Dockerfile
11.7.4 Building the image on the host
11.7.5 Without Compose
11.7.6 With Compose
12. Portainer: a Lightweight Docker GUI
12.1 What It Is
12.2 Installation
1. Docker Introduction
1.1 Overview
Docker is an open-source application container engine, written in Go and released under the Apache 2.0 license.
Docker lets developers package their applications and dependencies into a lightweight, portable container that can then be deployed to any popular Linux machine; it can also provide virtualization.
Containers are fully sandboxed and have no interfaces to one another (similar to iPhone apps); more importantly, the performance overhead of containers is extremely low.
Docker's stated goal is "Build, Ship and Run Any App, Anywhere": by managing the full lifecycle of an application component (packaging, distribution, deployment, running), it lets a user's app (a web application, a database, and so on) and its runtime environment achieve "build the image once, run it anywhere".
In short, Docker is a container virtualization technology that packages software together with its runtime environment and configuration, which makes continuous integration easier and streamlines releases.
1.2 Docker Use Cases
Automated packaging and publishing of web applications.
Automated testing and continuous integration/delivery.
Deploying and scaling databases or other backend applications in service environments.
Building your own PaaS environment by compiling from scratch or by extending an existing OpenShift or Cloud Foundry platform.
1.3 Advantages of Docker
Docker is an open platform for developing, shipping, and running applications. It lets you separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure the same way you manage your applications. By taking advantage of Docker's methodology for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.
1. Fast, consistent delivery of your applications
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers that provide your applications and services.
Containers are great for continuous integration and continuous delivery (CI/CD) workflows. Consider the following example scenario:
Your developers write code locally and share their work with colleagues using Docker containers.
They use Docker to push their applications into a test environment and run automated and manual tests.
When developers find bugs, they can fix them in the development environment and redeploy to the test environment for testing and validation.
When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the production environment.
2. Responsive deployment and scaling
Docker's container-based platform allows for highly portable workloads. Docker containers can run on a developer's local machine, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
Docker's portability and lightweight nature also make it easy to dynamically manage workloads, scaling applications and services up or tearing them down in near real time as business needs dictate.
3. Running more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so you can use more of your compute capacity to achieve your business goals. Docker is a good fit for high-density environments as well as for small and medium deployments where you need to do more with fewer resources.
1.4 Docker Architecture
Three basic Docker concepts:
- Image:
A Docker image is essentially a root filesystem. For example, the official image ubuntu:16.04 contains a complete root filesystem of a minimal Ubuntu 16.04 system.
- Container:
The relationship between an image and a container is like that between a class and an instance in object-oriented programming: the image is the static definition, and the container is the running instance of that image. Containers can be created, started, stopped, removed, paused, and so on.
- Repository:
A repository can be thought of as a version-control hub for storing images.
Docker uses a client-server (C/S) architecture and manages and creates Docker containers through a remote API.
Docker containers are created from Docker images.
The relationship between containers and images is analogous to objects and classes in object-oriented programming.
Docker vs. virtual machines:
As a lightweight form of virtualization, Docker has the following clear advantages over traditional virtual machines for running applications:
Docker containers are fast: they start and stop in seconds, far quicker than traditional virtual machines, which take minutes.
Docker containers need very few system resources; a single host can run thousands of containers at once (on IBM servers, on the order of 10K container instances have been run simultaneously).
Docker borrows design ideas from Git for fetching, distributing, and updating application images, with shared storage and incremental updates.
Through Dockerfiles, Docker supports flexible automated build and deployment mechanisms, improving efficiency and standardizing workflows.
Beyond the application running inside it, a Docker container consumes almost no additional system resources, keeping overhead minimal while preserving application performance. To run N different applications with traditional virtualization you need N virtual machines, each with its own dedicated memory, disk, and other resources; with Docker you only need to start N thin, isolated containers and put the applications inside them. The applications get near-native performance.
Comparing the characteristics of Docker containers with traditional virtual machines shows that container technology has enormous advantages in many scenarios.
2. Installing Docker
2.1 Installing Docker on CentOS
Docker supports the following 64-bit CentOS versions:
CentOS 7
CentOS 8
and later...
2.2 Uninstalling Older Versions
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them along with their dependencies:
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine
2.3 Setting Up the Repository
Install the required packages. yum-utils provides the yum-config-manager utility:
yum install -y yum-utils \
               device-mapper-persistent-data \
               lvm2
Set up the stable repository.
Using the official source (can be slow):
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
Alibaba Cloud:
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Tsinghua University mirror:
yum-config-manager \
    --add-repo \
    https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
2.4 Installing Docker
Install the latest version of Docker:
yum install docker-ce docker-ce-cli containerd.io
docker-ce: the container engine (server side)
docker-ce-cli: the Docker client command line
containerd.io: the underlying runtime used to start containers
Install a specific version of Docker:
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
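If you are unsure which version strings are valid, the available package versions can be listed first. A small sketch (20.10.9 below is only an illustrative version string; pick one that your repository actually offers):
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io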
2.5 Starting Docker
Enable Docker at boot:
systemctl enable docker
Start Docker:
systemctl start docker
Run the hello-world test:
docker run hello-world
Check the version:
docker version
2.6 Uninstalling Docker
Remove the packages:
yum remove docker-ce
Delete images, containers, configuration files, and other data:
rm -rf /var/lib/docker
2.7 Docker Registry Mirrors
Pulling images from Docker Hub inside China can be slow; in that case you can configure a registry mirror (accelerator). Docker itself and many Chinese cloud providers offer such mirrors, for example:
USTC mirror: https://docker.mirrors.ustc.edu.cn/
NetEase: https://hub-mirror.c.163.com/
Alibaba Cloud: https://<your-id>.mirror.aliyuncs.com
Qiniu Cloud: https://reg-mirror.qiniu.com
If pulls still fail after configuring one mirror, switch to another. All major Chinese cloud providers offer Docker registry mirrors; it is best to pick the one matching the cloud platform your Docker host runs on.
Write the following into /etc/docker/daemon.json (create the file if it does not exist):
{"registry-mirrors":["https://reg-mirror.qiniu.com/"]}
Then reload and restart the service:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
3. Common Docker Commands
3.1 Help and Startup Commands
Start Docker: systemctl start docker
Stop Docker: systemctl stop docker
Restart Docker: systemctl restart docker
Check Docker status: systemctl status docker
Enable at boot: systemctl enable docker
Show Docker summary info: docker info
Show the overall Docker help: docker --help
Show help for a specific command: docker <command> --help
3.2 Image Commands
docker images: list local images
docker pull: pull or update a specified image from a registry
docker rmi: remove one or more local images
docker search: search for images on Docker Hub
docker system df: show disk usage of images/containers/volumes
docker commit: create a new image from a container
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
docker push: upload a local image to a registry (log in to the registry first)
docker push [OPTIONS] NAME[:TAG]
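A quick example session tying these commands together (ubuntu here is just an illustrative image name):
docker pull ubuntu        # pull the image from the registry
docker images             # the image now shows up in the local list
docker system df          # check space used by images/containers/volumes
docker rmi ubuntu         # remove the local copy again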
3.3 Container Commands
You need an image before you can create a container; that is the basic prerequisite (pull a CentOS or Ubuntu image for the demos below):
docker pull centos
docker pull ubuntu
3.3.1 docker run: create a new container and run a command
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
OPTIONS:
-a stdin: attach to standard streams; choose from STDIN/STDOUT/STDERR;
-d: run the container in the background and print the container ID;
-i: run the container in interactive mode, usually together with -t;
-P: publish all exposed container ports to random host ports;
-p: publish a specific port, in the form host_port:container_port;
-t: allocate a pseudo-TTY for the container, usually together with -i;
--name="nginx-lb": assign a name to the container;
--dns 8.8.8.8: set the DNS server for the container (defaults to the host's);
--dns-search example.com: set the container's DNS search domain (defaults to the host's);
-h "mars": set the container's hostname;
-e username="ritchie": set an environment variable;
--env-file=[]: read environment variables from a file;
--cpuset="0-2" or --cpuset="0,1,2": pin the container to specific CPUs;
-m: set the container's maximum memory;
--net="bridge": set the container's network mode; one of the four types bridge/host/none/container:<name|id>;
--link=[]: add a link to another container;
--expose=[]: expose a port or a range of ports;
--volume, -v: bind mount a volume
Example: start a container interactively from the centos:latest image and run /bin/bash inside it.
docker run -it centos /bin/bash
Parameter notes:
-i: interactive mode.
-t: allocate a terminal.
centos: the centos image.
/bin/bash: the command placed after the image name; we want an interactive shell, hence /bin/bash.
To leave the terminal, just type exit.
3.3.2 docker ps: list running containers
docker ps [OPTIONS]
Common OPTIONS:
-a: list all currently running containers plus all that have run in the past
-l: show the most recently created container
-n: show the n most recently created containers
-q: quiet mode, print only container IDs
3.3.3 Exiting a container
There are two ways to exit:
exit
after entering with run, exit leaves the shell and stops the container
ctrl+p+q
after entering with run, ctrl+p+q detaches without stopping the container
3.3.4 Starting a stopped container
docker start <container ID or name>
3.3.5 Restarting a container
docker restart <container ID or name>
3.3.6 Stopping a container
docker stop <container ID or name>
3.3.7 Force-stopping a container
docker kill <container ID or name>
3.3.8 Removing a stopped container
docker rm <container ID>
3.3.9 Removing multiple containers at once
docker rm -f $(docker ps -a -q)
docker ps -a -q | xargs docker rm
3.3.10 Important: daemon containers
Starting a daemonized container (a background service)
In most cases we want the Docker service to run in the background; the -d flag runs a container in detached mode:
docker run -d <image name>
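A caveat worth spelling out here (standard Docker behavior, shown with the ubuntu image as an example): a container only stays alive while its main process runs in the foreground, so daemonizing an image whose default command exits immediately leaves you with a stopped container:
docker run -d ubuntu      # bash exits at once without an attached TTY
docker ps -a              # the new container shows status "Exited"
This is why the redis demo below stays up with -d: redis-server keeps running in the foreground inside the container.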
Redis foreground vs. background demo
Foreground, interactive:
docker run -it redis:6.0.8
Background, daemonized:
docker run -d redis:6.0.8
3.3.11 Viewing container logs
docker logs <container ID>
3.3.12 Viewing processes running inside a container
docker top <container ID>
3.3.13 Inspecting container details
docker inspect <container ID>
3.3.14 Entering a running container with an interactive shell
docker exec -it <container ID> <shell>
docker attach <container ID>
Difference between exec and attach:
attach connects directly to the terminal of the container's startup command and does not start a new process; exiting with exit stops the container.
exec opens a new terminal in the container and can start new processes; exiting with exit does not stop the container.
docker exec is recommended, because leaving its terminal does not stop the container.
Try it with the earlier Redis container instance.
Enter the Redis service:
docker exec -it <container ID> /bin/bash
docker exec -it <container ID> redis-cli
Typically a program is started in the background with -d, and exec is then used to enter the corresponding container instance.
3.3.15 Copying files between a container and the host
- Container → host:
docker cp <container ID>:<path in container> <destination path on host>
- Host → container:
docker cp <source path on host> <container ID>:<path in container>
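A concrete sketch (the container name web1 and the paths are made up for illustration):
docker cp web1:/var/log/app.log /tmp/app.log     # container → host
docker cp /tmp/app.conf web1:/etc/app.conf       # host → container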
3.3.16 Exporting and importing containers
Export: to export a local container, use the docker export command.
docker export 1e560fca3906 > ubuntu.tar
docker export <container ID> > <file name>.tar
Import: docker import turns a container snapshot file back into an image; the following example imports the snapshot file ubuntu.tar as the image test/ubuntu:v1:
cat docker/ubuntu.tar | docker import - test/ubuntu:v1
cat <file name>.tar | docker import - <user>/<image name>:<tag>
4. Docker Images
4.1 What Is an Image?
An image is a lightweight, executable, self-contained software package that includes everything needed to run a piece of software. We bundle the application together with its configuration and dependencies into a deliverable runtime environment (code, runtime libraries, environment variables, configuration files, and so on); that packaged runtime environment is the image file.
Only from such an image file can Docker container instances be created (similar to new-ing an object from a class in Java).
4.2 Layered Images
4.3 docker commit
docker commit turns a container copy into a new image:
docker commit -m="commit message" -a="author" <container ID> <target image name>:[tag]
Example: installing vim in an ubuntu container
1. Pull the ubuntu image from Docker Hub and run it successfully.
2. The stock Ubuntu image does not ship with the vim command.
3. With internet access, install vim by running the two commands shown after this list inside the container.
4. After installation, commit your own new image:
docker commit -m="add vim cmd" -a="xxxxx" <container ID> <target image name>:[tag]
5. Start the new image and compare it with the original:
the Ubuntu image downloaded from the official registry has no vim command;
the image built with your own commit has vim added, and it works.
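The two commands referenced in step 3 are the standard apt commands for an Ubuntu-based container (run them in the container's shell):
apt-get update            # refresh the package index first
apt-get -y install vim    # then install vim non-interactively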
4.4 Summary
Image layering in Docker lets you create new images by extending existing ones, much like a Java class extending a base class and adding what it needs.
A new image is built up layer by layer from a base image; each piece of software installed adds another layer on top of the existing image.
4.5 Pushing Images to a Remote Registry
4.5.1 Publishing a local image to Alibaba Cloud
4.5.2 Creating a private registry
Docker Registry
Docker Registry is the official tool for building a private image registry.
Pull the registry image:
docker pull registry
Run the private registry; this gives you a local, private Docker Hub:
docker run -d -p 5000:5000 -v /home/myregistry/:/var/lib/registry --privileged=true --name myregistry registry
By default the registry stores its data inside the container under /var/lib/registry; mapping that directory to a volume, as above, is recommended so it is easy to work with from the host.
Uploading a local image to the private registry
Use curl to check which images the private registry holds:
curl -XGET http://hostIp:5000/v2/_catalog
Re-tag the image to match the private registry's naming convention.
Pattern: docker tag <image>:<tag> <host>:<port>/<repository>:<tag>
For example, docker tag xxxx:1.2 hostIP:5000/zzzz:1.2 re-tags the image xxxx:1.2 as hostIP:5000/zzzz:1.2.
Edit the configuration file so Docker accepts plain HTTP:
vim /etc/docker/daemon.json
Add the insecure-registries entry shown below, replacing hostIP with the correct IP:
{
  "registry-mirrors": ["https://aa25jngu.mirror.aliyuncs.com"],
  "insecure-registries": ["hostIP:5000"]
}
By default Docker refuses to push images over plain HTTP; this option lifts that restriction. If the change does not take effect, restart Docker.
Push the image to the private registry:
docker push hostIP:5000/zzzz:1.2
Verify the registry contents again:
curl -XGET http://hostIp:5000/v2/_catalog
Private registry web UI
Harbor depends on many storage systems (Redis, MySQL, PostgreSQL, and so on), so it needs to orchestrate many cooperating containers; deploying and running it therefore relies on Docker's single-host orchestration tool, Docker Compose.
5. Docker Container Volumes
A volume is a directory or file that exists in one or more containers. It is mounted into the container by Docker but does not belong to the union filesystem, so it can bypass the Union File System to provide features for persisting and sharing data:
Volumes are designed for data persistence and are completely independent of the container's lifecycle, so Docker does not delete mounted volumes when a container is removed.
They persist data generated inside a Docker container onto the host's disk.
Run a container instance with volume storage:
docker run -it --privileged=true -v /absolute/path/on/host:/path/in/container <image name>
5.1 Purpose
We package the application and its environment into an image and run it as a container, but we want the data it produces to be persistent.
If the data a Docker container generates is not backed up, it naturally disappears when the container instance is removed.
To keep data around in Docker, we use volumes.
5.2 Characteristics
1. Volumes can share or reuse data between containers.
2. Changes in a volume take effect directly and in real time.
3. Changes in a volume are not included when the image is updated.
4. A volume's lifecycle lasts until no container uses it any more.
5.3 Examples
5.3.1 Mapping a volume between host and container
Command:
docker run -it -v /path/on/host:/path/in/container ubuntu /bin/bash
Check whether the volume is mounted correctly:
docker inspect <container ID>
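To narrow the output down to the mount information only, docker inspect's Go-template filter can be used (a small sketch):
docker inspect -f '{{ json .Mounts }}' <container ID>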
5.3.2 Sharing data between container and host
1. Changes made in the container are synced to the host.
2. Changes made on the host are synced to the container.
3. Stop the container, change the data on the host, restart the container, and check that the data is still in sync.
5.3.3 Read/write rules for volume mappings
Read-write (the default): rw
docker run -it --privileged=true -v /absolute/path/on/host:/path/in/container:rw <image name>
Read-only: ro
The container is restricted to reading; it cannot write.
docker run -it --privileged=true -v /absolute/path/on/host:/path/in/container:ro <image name>
5.3.4 Volume inheritance and sharing
Step 1: container 1 sets up the mapping to the host:
docker run -it --privileged=true -v /mydocker/u:/tmp --name u1 ubuntu
Step 2: container 2 inherits container 1's volume rules:
docker run -it --privileged=true --volumes-from u1 --name u2 ubuntu
Step 3: verify (see the sketch after this list):
- Host changes → visible in both container 1 and container 2
- Container 1 changes → visible on the host and in container 2
- Container 2 changes → visible on the host and in container 1
- Container 1 removed → what happens to the host and container 2?
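A minimal verification sketch for the list above (the file name a.txt is made up):
echo hello > /mydocker/u/a.txt    # on the host
cat /tmp/a.txt                    # inside u1 and inside u2: both print "hello"
docker rm -f u1                   # remove container 1
cat /tmp/a.txt                    # inside u2: still there; the volume outlives u1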
6. Common Installations with Docker
6.1 Installing Tomcat
Find the Tomcat image on Docker Hub.
Alternatively, check the available versions from the console with docker search tomcat.
6.1.1 Pulling the official image
docker pull tomcat
Check local images:
docker images|grep tomcat
6.1.2 Running a container from the tomcat image
docker run --name tomcat -p 8080:8080 -v /usr/local/tomcat:/usr/local/tomcat -d tomcat
-p 8080:8080: maps port 8080 on the host to port 8080 in the container.
-v /usr/local/tomcat:/usr/local/tomcat: mounts the host directory into the container.
Check that the container started:
docker ps
Visit the Tomcat welcome page.
Using -P
Uppercase -P assigns random host ports:
docker run --name tomcat -P -v /usr/local/tomcat:/usr/local/tomcat -d tomcat
Check that the container started:
docker ps
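One caveat, as an assumption about newer image versions rather than part of the original steps: recent official tomcat images ship with an empty webapps directory, so the welcome page may return 404. A common workaround is to promote the bundled webapps.dist:
docker exec -it tomcat /bin/bash   # enter the container
rm -r webapps                      # remove the empty webapps directory
mv webapps.dist webapps            # the default apps live in webapps.dist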
6.2 Installing MySQL
1. Check available MySQL versions
Browse the MySQL image repository: https://hub.docker.com/_/mysql?tab=tags
2. Pull the MySQL image
Here we pull the latest official image:
docker pull mysql:latest
3. Check local images
docker images
4. Run the container — simple version
docker run -itd --name mysql-test -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql
Parameter notes:
-p 3306:3306: maps the container's port 3306 to port 3306 on the host, so external hosts can reach the MySQL service at host_ip:3306.
MYSQL_ROOT_PASSWORD=123456: sets the password of the MySQL root user.
5. Verify the installation
Check with docker ps that the container is running.
Log in with a client, create a database and a table, and insert some rows, including rows with Chinese characters.
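One simple way to connect is from inside the container itself, reusing the password set by the run command above:
docker exec -it mysql-test mysql -uroot -p123456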
If the container is removed at this point, all databases and tables are lost.
Beware of the default character-set pitfall in the Docker image.
Check inside the MySQL container instance:
SHOW VARIABLES LIKE 'character%'
6. Run the container — production-style version
docker run -d -p 3306:3306 --privileged=true -v /xxxx/mysql/log:/var/log/mysql -v /xxxx/mysql/data:/var/lib/mysql -v /xxxx/mysql/conf:/etc/mysql/conf.d -e MYSQL_ROOT_PASSWORD=123456 --name mysql mysql
Create my.cnf in the /xxxx/mysql/conf directory; the volume mapping syncs it into the MySQL container instance:
[client]
default_character_set=utf8
[mysqld]
collation_server = utf8_general_ci
character_set_server = utf8
Restart the MySQL container instance, re-enter it, and check the character encoding again:
docker restart mysql
Check inside the MySQL container instance:
SHOW VARIABLES LIKE 'character%'
Create a new database and table again and insert Chinese text to test.
Conclusion: after installing MySQL with Docker and running the container, fix the character-set configuration first, and only then create databases, tables, and data.
Remove the current container instance and repeat the exercise from scratch.
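A quick end-to-end sketch of the Chinese-text round trip (database db01 and table t1 are made-up names):
docker exec -it mysql mysql -uroot -p123456 --default-character-set=utf8 \
  -e "create database db01; use db01; create table t1(id int, name varchar(20)); insert into t1 values(1,'中文测试'); select * from t1;"
The final select should print the Chinese row intact rather than garbled.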
6.3 Installing Redis
6.3.1 Pulling the redis image tagged 6.0.8 from Docker Hub
docker pull redis:6.0.8
6.3.2 Starting Redis the simple way
docker run -d -p 6379:6379 redis:6.0.8
Test it:
docker exec -it <container ID> /bin/bash
redis-cli
keys *
ping
set k1 v1
get k1
6.3.3 Real-world setup: directory mounts (config file and persistent data)
1. Create the directory /app/redis on the CentOS host:
mkdir -p /app/redis
2. Copy a redis.conf template file into the /app/redis directory:
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## MODULES #####################################
# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
#
# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode no
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
always-show-logo yes
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# +------------------+ +---------------+
# | Master | ---> | Replica |
# | (receive writes) | | (exact copy) |
# +------------------+ +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition replicas automatically try to reconnect to masters
# and resynchronize with them.
#
# replicaof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
# COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes
# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60
# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the replica to connect with the master.
#
# Port: The port is communicated by the replica during the replication
# handshake, and is normally the port that the replica is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
################################### CLIENTS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
############################## MEMORY MANAGEMENT ################################
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes
############################# LAZY FREEING ####################################
# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
# in order to make room for new data, without going over the specified
# memory limit.
# 2) Because of expire: when a key with an associated time to live (see the
# EXPIRE command) must be deleted from memory.
# 3) Because of a side effect of a command that stores data on a key that may
# already exist. For example the RENAME command may delete the old key
# content when it is replaced with another one. Similarly SUNIONSTORE
# or SORT with STORE option may delete existing keys. The SET command
# itself removes any old content of the specified key in order to replace
# it with the specified string.
# 4) During replication, when a replica performs a full resynchronization with
# its master, the content of the whole database is removed in order to
# load the RDB file just transferred.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
# [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A replica of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple replicas able to failover, they exchange messages
# in order to try to give an advantage to the replica with the best
# replication offset (more data from the master processed).
# Replicas will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single replica computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the replica will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-replica-validity-factor 10
# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
#
# cluster-replica-no-failover no
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
########################## CLUSTER DOCKER/NAT support ########################
# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
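#
# A hedged usage sketch (run via redis-cli; the commands below are standard
# Redis commands):
#   SLOWLOG GET 10    # fetch up to the 10 most recent slow entries
#   SLOWLOG LEN       # number of entries currently stored
#   SLOWLOG RESET     # clear the log and reclaim its memory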
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
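#
# A hedged runtime sketch (via redis-cli): enable the monitor, then query it:
#   CONFIG SET latency-monitor-threshold 100   # log events slower than 100 ms
#   LATENCY LATEST                             # latest spike per event type
#   LATENCY RESET                              # discard the collected samples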
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
#notify-keyspace-events ""
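#
# A hedged usage sketch: with "notify-keyspace-events Ex" set above, a client
# can watch key expirations on database 0 like this:
#   redis-cli psubscribe '__keyevent@0__:expired'
#   # elsewhere: redis-cli set mykey hello ex 5
#   # about 5 seconds later the subscriber receives a pmessage naming "mykey"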
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
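#
# A hedged runtime sketch: the limits can also be changed without a restart;
# classes omitted from the string keep their current values:
#   CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"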
# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb
# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits |
# +--------+------------+------------+------------+------------+------------+
# | 0 | 104 | 255 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 1 | 18 | 49 | 255 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 10 | 10 | 18 | 142 | 255 | 255 |
# +--------+------------+------------+------------+------------+------------+
# | 100 | 8 | 11 | 49 | 143 | 255 |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1
########################### ACTIVE DEFRAGMENTATION #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
# to use the copy of Jemalloc we ship with the source code of Redis.
# This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
# issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
# needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# Enable active defragmentation
# activedefrag yes
# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10
# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100
# Minimal effort for defrag in CPU percentage
# active-defrag-cycle-min 5
# Maximal effort for defrag in CPU percentage
# active-defrag-cycle-max 75
# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000
3. Edit the redis.conf file under /app/redis
3.1 Enable Redis authentication (optional)
requirepass 123
4. Allow remote connections to Redis (required)
Comment out: # bind 127.0.0.1
5. daemonize no
Comment out daemonize yes, or set daemonize no explicitly, because that option conflicts with the -d flag of docker run and will keep the container failing to start.
6. Enable Redis data persistence (optional)
appendonly yes
7. Create the container from the redis:6.0.8 image
docker run -p 6379:6379 --name myredis --privileged=true -v /app/redis/redis.conf:/etc/redis/redis.conf -v /app/redis/data:/data -d redis:6.0.8 redis-server /etc/redis/redis.conf
8. Test with redis-cli
docker exec -it myredis redis-cli
9. Prove that Docker started Redis with our own config file
9.1 Before the change, run the select 15 command (Redis ships with 16 databases by default)
9.2 Edit redis.conf on the host
databases 10
Because the file is mounted, the host-side change is visible inside the container; restart the container to apply it.
9.3 Test with redis-cli again: docker exec -it myredis redis-cli, where select 15 now fails because only databases 0-9 exist
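A minimal interactive sketch of step 9's verification, assuming the container name myredis from step 7 (the exact error text can vary by Redis version):

docker exec -it myredis redis-cli
# 127.0.0.1:6379> auth 123      # needed because requirepass is set
# 127.0.0.1:6379> select 15     # before the change: OK; after databases 10: (error) ERR DB index is out of range
# 127.0.0.1:6379> select 9      # the highest valid index once databases 10 is active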
6.4 Installing nginx
6.4.1 Pull the nginx image from Docker Hub
docker pull nginx
6.4.2 Run the container
docker run --name my-nginx -p 8080:80 -v /home/nginx/nginx.conf:/etc/nginx/nginx.conf -d nginx
6.4.3 Verify the installation
The nginx service can now be reached from a browser on port 8080 of the host.
6.4.4 Verify that editing the host-side config file takes effect
Change the nginx log file path:
edit the log path in /home/nginx/nginx.conf on the host,
restart the container,
then enter the container and check whether the log path has changed.
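A hedged sketch of that verification, using the container name my-nginx from above:

docker exec my-nginx nginx -t                               # syntax-check the mounted config inside the container
docker restart my-nginx
docker exec my-nginx grep access_log /etc/nginx/nginx.conf  # confirm the edited log path is visible inside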
7. Complex Docker Installations
7.1 MySQL master-slave replication
7.3.1 How master-slave replication works
MySQL master-slave replication relies on the binlog, the binary log file on disk that records every change made on the MySQL server.
Replication ships the data in the binlog from the master to the slaves. The process is generally asynchronous, i.e. operations on the master do not wait for the binlog sync to finish.
The detailed flow:
1. The master writes the binlog: update SQL on the master (update, insert, delete) is written to the binlog;
2. The master ships the binlog: the master creates a log dump thread that sends the binlog to the slave;
3. The slave writes the relay log: when the slave connects to the master it creates an I/O thread that requests the master's binlog updates and writes what it receives into a log file called the relay log;
4. The slave replays: the slave also creates a SQL thread that reads the relay log and replays its contents, which finally brings the slave into consistency with the master.
7.3.2 Setup steps
Create the master container instance on port 3307
docker run -p 3307:3306 --name mysql-master \
-v /mydata/mysql-master/log:/var/log/mysql \
-v /mydata/mysql-master/data:/var/lib/mysql \
-v /mydata/mysql-master/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7
Enter the /mydata/mysql-master/conf directory and create my.cnf
vim my.cnf
[mysqld]
## server_id must be unique within the LAN
server_id=101
## databases that should not be replicated
binlog-ignore-db=mysql
## enable the binary log
log-bin=mall-mysql-bin
## binlog cache size (per transaction)
binlog_cache_size=1M
## binlog format to use (mixed, statement, row)
binlog_format=mixed
## binlog expiry in days; the default 0 means never purge automatically
expire_logs_days=7
## skip the listed replication errors so the slave does not stop on them,
## e.g. 1062 = duplicate primary key, 1032 = master/slave data mismatch
slave_skip_errors=1062
Restart the master instance after changing the config
docker restart mysql-master
Enter the mysql-master container
docker exec -it mysql-master /bin/bash
mysql -uroot -proot
Create the replication user inside the master container
CREATE USER 'slave'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'slave'@'%';
Create the slave container instance on port 3308
docker run -p 3308:3306 --name mysql-slave \
-v /mydata/mysql-slave/log:/var/log/mysql \
-v /mydata/mysql-slave/data:/var/lib/mysql \
-v /mydata/mysql-slave/conf:/etc/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:5.7
Enter the /mydata/mysql-slave/conf directory and create my.cnf
vim my.cnf
[mysqld]
## server_id must be unique within the LAN
server_id=102
## databases that should not be replicated
binlog-ignore-db=mysql
## enable the binary log, in case this slave later serves as the master of another instance
log-bin=mall-mysql-slave1-bin
## binlog cache size (per transaction)
binlog_cache_size=1M
## binlog format to use (mixed, statement, row)
binlog_format=mixed
## binlog expiry in days; the default 0 means never purge automatically
expire_logs_days=7
## skip the listed replication errors so the slave does not stop on them,
## e.g. 1062 = duplicate primary key, 1032 = master/slave data mismatch
slave_skip_errors=1062
## relay_log configures the relay log
relay_log=mall-mysql-relay-bin
## log_slave_updates makes the slave write replicated events into its own binlog
log_slave_updates=1
## make the slave read-only (except for users with the SUPER privilege)
read_only=1
Restart the slave instance after changing the config
docker restart mysql-slave
Check the replication status in the master database
show master status;
Enter the mysql-slave container
docker exec -it mysql-slave /bin/bash
mysql -uroot -proot
Configure replication in the slave database
change master to master_host='<host ip>', master_user='slave', master_password='123456', master_port=3307, master_log_file='mall-mysql-bin.000001', master_log_pos=617, master_connect_retry=30;
Parameters of the change master command:
master_host: IP address of the master database;
master_port: port the master database listens on;
master_user: the account created on the master for replicating data;
master_password: the password of that replication account;
master_log_file: the binlog file to replicate from, taken from the File column of show master status;
master_log_pos: the position to start replicating from, taken from the Position column of show master status;
master_connect_retry: retry interval in seconds after a failed connection.
Check the replication status in the slave database
show slave status \G
Start replication in the slave database
start slave;
Checking the slave status again shows that it is now replicating
Replication test
On the master: create a database, use it, create a table, insert data, OK
On the slave: use the database and query the rows, OK (a command sketch follows)
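A hedged command sketch of that test; db01 and t1 are illustrative names, and a mysql client on the host is assumed (the same SQL can equally be run inside each container):

# on the master, published on host port 3307:
mysql -h127.0.0.1 -P3307 -uroot -proot -e "create database db01; create table db01.t1(id int, name varchar(20)); insert into db01.t1 values(1,'z3');"
# on the slave, published on host port 3308; the row written on the master should appear:
mysql -h127.0.0.1 -P3308 -uroot -proot -e "select * from db01.t1;"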
7.2 Installing a Redis cluster
7.2.1 What is a Redis cluster
Redis 3.0 introduced cluster mode, which implements distributed storage: data is sharded so that different keys live on different master nodes, solving the problem of storing huge data sets.
A Redis cluster is decentralized: there is no central node. To the client the whole cluster looks like a single unit; you can connect to any node and operate on it just like a standalone Redis instance, with no proxy middleware. When the key a client operates on is not assigned to that node, Redis returns a redirection pointing at the correct node.
Redis also has a built-in high-availability mechanism: the cluster supports N master nodes, each master can carry multiple slave nodes, and when a master goes down the cluster promotes one of its slaves to be the new master.
As the figure above shows, a Redis cluster can be seen as several master-slave setups combined; each master-slave setup forms one node of the cluster (only the master handles requests, while the slaves mainly provide high availability for the node).
7.2.2 Data distribution in a Redis cluster: the hash slot algorithm
A Redis cluster partitions data with hash slots. There are 16384 hash slots (numbered 0-16383), distributed across the Redis nodes so that each node manages only a part of the slots. When data is accessed, the cluster computes CRC16 over the key and takes it modulo 16384 (slot = CRC16(key) % 16384); the result is the slot the key-value pair belongs to, which identifies the Redis node that holds it, and the read or write is performed directly on that node.
The hash slot partitioning scheme has these properties:
- it decouples data from nodes, simplifying scaling out and scaling in;
- each node maintains its own slot mapping, so no client or proxy service has to maintain slot partition metadata;
- it supports lookups between nodes, slots and keys, useful for data routing, online resizing and similar scenarios.
Computing hash slots
A Redis cluster has 16384 built-in hash slots, and Redis maps them roughly evenly onto the nodes according to the node count. When a key-value pair is placed into the cluster, Redis first runs CRC16 over the key and takes the result modulo 16384, so every key maps to a slot numbered 0-16383 and therefore to some node. For example, keys A and B might land on Node2 while key C lands on Node3, as in the sketch below.
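A small hedged illustration: any cluster node can report which slot a key maps to (the slot values shown are examples, not guarantees):

redis-cli -p 6381 cluster keyslot k1   # e.g. (integer) 12706
redis-cli -p 6381 cluster keyslot k2   # e.g. (integer) 449
# with 3 masters the typical ranges are 0-5460, 5461-10922 and 10923-16383,
# so the slot number tells you which master will store the key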
7.2.3 Configuring a 3-master 3-slave Redis cluster
Stop the firewall + make sure the docker daemon is running
Create 6 docker container Redis instances
docker run -d --name redis-node-1 --net host --privileged=true -v /data/redis/share/redis-node-1:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6381
docker run -d --name redis-node-2 --net host --privileged=true -v /data/redis/share/redis-node-2:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6382
docker run -d --name redis-node-3 --net host --privileged=true -v /data/redis/share/redis-node-3:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6383
docker run -d --name redis-node-4 --net host --privileged=true -v /data/redis/share/redis-node-4:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6384
docker run -d --name redis-node-5 --net host --privileged=true -v /data/redis/share/redis-node-5:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6385
docker run -d --name redis-node-6 --net host --privileged=true -v /data/redis/share/redis-node-6:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6386
What the options mean:
docker run
create and run a docker container instance
--name redis-node-6
container name
--net host
use the host's IP and ports directly
--privileged=true
grant host root privileges inside the container
-v /data/redis/share/redis-node-6:/data
container volume, host path:path inside the container
redis:6.0.8
redis image and version tag
--cluster-enabled yes
enable redis cluster mode
--appendonly yes
enable persistence
--port 6386
redis port number
Enter container redis-node-1 and build the cluster relationship across the 6 machines
Enter the container
docker exec -it redis-node-1 /bin/bash
Build the master/slave relationship
# note: the following command must be run inside the docker container, and use your real IP address
redis-cli --cluster create 192.168.111.147:6381 192.168.111.147:6382 192.168.111.147:6383 192.168.111.147:6384 192.168.111.147:6385 192.168.111.147:6386 --cluster-replicas 1
--cluster-replicas 1 means: create one slave for every master
Using 6381 as the entry point, check the cluster state
cluster info
Using 6381 as the entry point, check the node state
cluster nodes
Master-slave failover and migration
Reading and writing data:
Write two keys via 6381; add the -c flag so cluster redirections do not fail
redis-cli -p 6381 -c
set k1 v1
set k2 v2
Read k1 and k2 back
get k1
get k2
Check the cluster info (any node will do)
redis-cli --cluster check 192.168.111.147:6381
Failover migration
Switch master 6381 with its slave: first stop master 6381
With 6381 down, its actual slave takes over
Which slave was assigned to master 6381 depends on your actual deployment; use whatever node number it really is
Check the cluster info again
Enter the cluster from node-2
docker exec -it redis-node-2 /bin/bash
redis-cli -p 6382 -c
cluster nodes
6381 is down and 63xx has been promoted to the new master
Restore the original 3 masters + 3 slaves
where x is the node that had been assigned as 6381's slave
docker start redis-node-1
docker stop redis-node-x
docker start redis-node-x
Check the cluster state
redis-cli --cluster check <your IP>:6381
7.2.4 Scaling out: adding a master and slave
This builds on the cluster deployed in the previous section.
Create two new nodes 6387 and 6388, start them, then confirm there are 8 nodes
docker run -d --name redis-node-7 --net host --privileged=true -v /data/redis/share/redis-node-7:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6387
docker run -d --name redis-node-8 --net host --privileged=true -v /data/redis/share/redis-node-8:/data redis:6.0.8 --cluster-enabled yes --appendonly yes --port 6388
docker ps
Enter the 6387 container instance
docker exec -it redis-node-7 /bin/bash
Add the new node 6387 (no slots assigned yet) to the original cluster as a master
redis-cli --cluster add-node <your real IP>:6387 <your real IP>:6381
6387 is the node being added as the new master
6381 is the guide node already inside the cluster
Check the cluster the 1st time
redis-cli --cluster check <real IP>:6381
Reassign slot numbers
Previously 3 masters split the 16384 slots evenly; now 4 masters split them
16384 / 4 (number of masters) = 4096
redis-cli --cluster reshard <IP>:<port>
redis-cli --cluster reshard 192.168.111.147:6381
How many slots do you want to move (from 1 to 16384)? 4096   # how many slots the fourth master should receive
What is the receiving node ID?   # enter the node ID of the newly added master
Please enter all the source node IDs
Source node #1: all   # take the 4096 slots from all existing masters and hand them to the receiving node ID
Check the cluster the 2nd time
redis-cli --cluster check <real IP>:6381
Notes on the slot assignment
Why does 6387 hold 3 new ranges while the old masters keep contiguous ones?
A full redistribution would cost too much, so each of the 3 old masters hands over a share: roughly a third (about 1365 slots) from each old master goes to the new node 6387.
Assign slave 6388 to master 6387
redis-cli --cluster add-node ip:<new slave port> ip:<new master port> --cluster-slave --cluster-master-id <new master node ID>
redis-cli --cluster add-node 192.168.111.147:6388 192.168.111.147:6387 --cluster-slave --cluster-master-id e4781f644d4a4e4d4b4d107157b9ba8144631451
(the ID above is 6387's node ID; substitute your own)
Check the cluster the 3rd time
redis-cli --cluster check 192.168.111.147:6382
7.2.5 Scaling in: removing a master and slave
Goal: take 6387 and 6388 offline
Check the cluster to obtain 6388's node ID
redis-cli --cluster check 192.168.111.147:6382
Remove 6388: delete slave node 4 (6388) from the cluster
redis-cli --cluster del-node ip:<slave port> <node ID of slave 6388>
redis-cli --cluster del-node 192.168.111.147:6388 5d149074b7e57b802287d1797a874ed7a1a284a8
Check the cluster
redis-cli --cluster check 192.168.111.147:6382
Empty 6387's slots and reassign them; in this example all the freed slots go to 6381
redis-cli --cluster reshard 192.168.111.147:6381
How many slots do you want to move (from 1 to 16384)? 4096   # move all of the fourth master's slots away
What is the receiving node ID?   # enter the node ID that receives the 4096 slots; here they all go to master 1
Please enter all the source node IDs
Source node #1: <node ID of the fourth master>
Source node #2: done
Check the cluster a second time
redis-cli --cluster check 192.168.111.147:6381
All 4096 slots went to 6381, which now holds 8192 slots; giving everything to 6381 keeps it to a single reshard, whereas spreading them back over three masters would take three rounds of input
Remove 6387
redis-cli --cluster del-node ip:<port> <node ID of 6387>
redis-cli --cluster del-node 192.168.111.147:6387 e4781f644d4a4e4d4b4d107157b9ba8144631451
Check the cluster a third time
redis-cli --cluster check 192.168.111.147:6381
8. Dockerfile Explained
8.1 What is a Dockerfile
A Dockerfile is a text file used to build a Docker image: a script made up of the instructions and parameters needed to construct the image.
Building takes three steps:
Write the Dockerfile
Build the image with docker build
Run a container instance from the image with docker run
8.2 How a Dockerfile build works
Dockerfile basics
1: every reserved-word instruction must be upper-case and followed by at least one argument
2: instructions execute in order, top to bottom
3: # marks a comment
4: every instruction creates a new image layer and commits it to the image
Roughly how Docker executes a Dockerfile:
(1) docker runs a container from the base image
(2) it executes one instruction, modifying the container
(3) it performs something like docker commit, committing a new image layer
(4) docker then runs a new container from the just-committed image
(5) it executes the next instruction in the dockerfile, repeating until all instructions are done
8.3 Common Dockerfile reserved words
FROM
The base image: which image the new image is built on top of. It names an existing image as the template, and the first instruction must be FROM
MAINTAINER
Name and email address of the image maintainer
RUN
Commands to run while the container is being built
Two formats:
shell format
RUN <command line>, equivalent to running the command in a Linux shell
RUN yum -y install vim
exec format
RUN ["executable", "arg1", "arg2"]
# RUN ["./test.php", "dev", "offline"] is equivalent to RUN ./test.php dev offline
EXPOSE
The ports the container exposes to the outside
WORKDIR
The working directory a terminal lands in by default after the container is created; a starting point
USER
The user the image runs as; if unspecified, the default is root
ENV
Sets environment variables during the image build
ENV MY_PATH /usr/mytest
The variable can be used in any later RUN instruction, just as if the command were prefixed with the variable assignment; it can also be referenced directly in other instructions.
ADD
Copies files from a host directory into the image, automatically handling URLs and extracting tar archives
COPY
Like ADD, it copies files and directories into the image: files/directories at <src> inside the build context are copied to <dest> inside a new image layer
COPY src dest
COPY ["src", "dest"]
<src>: the source file or directory
<dest>: the target path inside the container; it does not need to exist beforehand, missing paths are created automatically
VOLUME
Container data volumes, for saving and persisting data
CMD
What the container should do once it starts
Note: a Dockerfile may contain multiple CMD instructions, but only the last one takes effect, and CMD is replaced by any arguments given after docker run (see the sketch after this comparison)
Difference between CMD and RUN:
CMD runs at docker run time.
RUN runs at docker build time.
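A hedged illustration of this behaviour with the official tomcat image, whose Dockerfile ends in CMD ["catalina.sh", "run"]:

docker run -it -p 8080:8080 tomcat             # CMD runs, tomcat starts, port 8080 answers
docker run -it -p 8080:8080 tomcat /bin/bash   # /bin/bash replaces CMD, so tomcat never starts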
ENTRYPOINT
Also specifies a command to run when the container starts
Similar to CMD, but ENTRYPOINT is not overridden by the arguments after docker run; instead, those arguments are passed as parameters to the program named by ENTRYPOINT
Command format and example
ENTRYPOINT ["<executeable>","<param1>","<param2>",...]
It can be combined with CMD: CMD is generally used for the variable part, effectively supplying default parameters to ENTRYPOINT, as the following example shows.
Example:
Assume an nginx:test image built from this Dockerfile:
FROM nginx
ENTRYPOINT ["nginx", "-c"] # fixed part
CMD ["/etc/nginx/nginx.conf"] # variable part
1. Run without arguments
$ docker run nginx:test
The container runs the following command by default, starting the main process:
nginx -c /etc/nginx/nginx.conf
2. Run with an argument
$ docker run nginx:test -c /etc/nginx/new.conf
The container runs the following command instead, starting the main process (/etc/nginx/new.conf: assuming the file already exists inside the container)
nginx -c /etc/nginx/new.conf
8.4 Dockerfile example
Build a custom image mycentosjava8
A Centos7 image equipped with vim + ifconfig + jdk8
Download jdk8: https://mirrors.yangxingzhen.com/jdk/
Write the Dockerfile
FROM centos
MAINTAINER aabb
ENV MYPATH /usr/local
WORKDIR $MYPATH
# install the vim editor
RUN yum -y install vim
# install net-tools so ifconfig can inspect the network IP
RUN yum -y install net-tools
# install java8 and its lib dependencies
RUN yum -y install glibc.i686
RUN mkdir /usr/local/java
# ADD uses a path relative to the build context: jdk-8u171-linux-x64.tar.gz is added (and auto-extracted)
# into the container, and the archive must sit in the same directory as the Dockerfile
ADD jdk-8u171-linux-x64.tar.gz /usr/local/java/
# configure the java environment variables
ENV JAVA_HOME /usr/local/java/jdk1.8.0_171
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
ENV PATH $JAVA_HOME/bin:$PATH
EXPOSE 80
CMD echo $MYPATH
CMD echo "success--------------ok"
CMD /bin/bash
Build
docker build -t <new image name>:TAG .
docker build -t centosjava8:1.5 .
Run
docker run -it <new image name>:TAG
docker run -it centosjava8:1.5 /bin/bash
Enter the container and check the vim, ifconfig and java commands
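A quick hedged sanity-check sketch inside the freshly started container:

vim --version    # vim is installed
ifconfig         # net-tools is installed
java -version    # should report java version "1.8.0_171"
echo $MYPATH     # /usr/local, the WORKDIR set in the Dockerfile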
9. Docker and Microservices
9.1 Create an ordinary microservice module in IDEA
Create a new Spring Boot project from the official starter: https://start.spring.io/
Write the config (application.properties)
server.port=6001
Business class
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import java.util.UUID;

@RestController
public class OrderController {
    @Value("${server.port}")
    private String port;

    @RequestMapping("/order/docker")
    public String helloDocker() {
        return "hello docker" + "\t" + port + "\t" + UUID.randomUUID().toString();
    }

    @RequestMapping(value = "/order/index", method = RequestMethod.GET)
    public String index() {
        return "服务端口号: " + "\t" + port + "\t" + UUID.randomUUID().toString();
    }
}
9.2 Build the microservice with a dockerfile and deploy it to a docker container
Compile and package the jar
Write the Dockerfile
# base image: java
FROM java:8
# author
MAINTAINER xxx
# VOLUME declares /tmp as a temp directory; a temp dir is created under /var/lib/docker on the host and linked to /tmp in the container
VOLUME /tmp
# add the jar into the container and rename it app.jar
ADD xxxxx-0.0.1-SNAPSHOT.jar app.jar
# run the jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-jar","/app.jar"]
# expose port 6001 for the microservice
EXPOSE 6001
Upload the microservice jar and the Dockerfile into the same directory /mydocker
docker build -t <new image name>:tag .
Run the container
docker run -d -p 6001:6001 <new image name>:tag
Access test
<host ip>:6001/order/docker
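A hedged smoke test from any machine that can reach the docker host (substitute the real host IP):

curl http://<host-ip>:6001/order/docker   # expect: hello docker, the port 6001 and a UUID
curl http://<host-ip>:6001/order/index    # expect: the port plus a fresh UUID on each call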
10. Docker Networking
10.1 Introduction to Docker networking
1. Docker provides 3 networks by default; a container created without specifying a network mode uses bridge mode.
2. Networking provides interconnection and communication between containers, as well as port mapping.
3. When a container's IP changes, containers can still reach each other directly by service name, unaffected by the change.
10.2 Common basic commands
List networks
docker network ls
Inspect network metadata
docker network inspect <network name>
Delete a network
docker network rm <network name>
10.3 Network modes
10.3.1 Overview
bridge mode: selected with --network bridge; uses docker0 by default
host mode: selected with --network host
none mode: selected with --network none
container mode: selected with --network container:NAME or container ID
10.3.2 How default container IPs are produced
1 Start two ubuntu container instances
docker run -it --name u1 ubuntu bash
docker run -it --name u2 ubuntu bash
docker ps
2 docker inspect <container ID or name>
docker inspect u1 | tail -n 20
docker inspect u2 | tail -n 20
3 Stop instance u2, create u3, and watch the IP change
docker rm -f u2
docker run -it --name u3 ubuntu bash
docker inspect u3 | tail -n 20
Conclusion: the IP inside a docker container can change (here u3 typically receives the address u2 used to have)
10.4 Case studies
10.4.1 bridge
The Docker service creates a docker0 bridge by default (with a docker0 internal interface on it). The bridge network is named docker0; at the kernel level it connects to the other physical or virtual NICs, which puts all containers and the local host on the same physical network. Docker assigns the IP address and subnet mask of the docker0 interface, so the host and containers can talk to each other over the bridge.
View the details of the bridge network, grepping for the name field
docker network inspect bridge | grep name
ifconfig | grep docker
1: Docker uses a Linux bridge: it creates a virtual container bridge (docker0) on the host, and when a container starts, Docker assigns it an IP (the Container-IP) from the bridge's subnet, with the bridge acting as every container's default gateway. Because all containers on the same host attach to the same bridge, containers can reach each other directly via their Container-IPs.
2: When docker run is issued without a network, the default bridge mode is used, i.e. docker0. Running ifconfig on the host shows docker0 and any networks you create yourself (covered later); eth0, eth1, eth2... are NIC one, two, three..., lo is 127.0.0.1, i.e. localhost, and inet addr shows the NIC's IP address.
3: The docker0 bridge creates pairs of peer virtual interfaces: one end is called veth, the other eth0, matched in pairs.
3.1 The whole host runs in docker0 bridge mode, like a switch with a row of ports, each port being a veth. A virtual interface is created on the host and inside the container and the two are wired together (such a pair is called a veth pair);
3.2 every container instance also has a NIC inside, presented as eth0;
3.3 each veth on docker0 is matched with the eth0 inside one container instance, paired up one to one.
Through all of this, every container on the host joins this internal network; two containers on the same network each get an IP from the gateway's subnet, and their networks are mutually reachable.
docker run -d -p 8081:8080 --name tomcat81 billygoo/tomcat8-jdk8
docker run -d -p 8082:8080 --name tomcat82 billygoo/tomcat8-jdk8
Verify the host NIC against the two container NICs
Use the ip addr command to inspect the NICs on the host and in each container
10.4.2 host
What is it?
The container uses the host's IP address to talk to the outside world directly, with no extra NAT translation.
The container does not get an independent Network Namespace but shares one with the host. It does not virtualize its own NIC or configure its own IP; it uses the host's IP and ports.
Start a container on the host network:
docker run -d -p 8083:8080 --network host --name tomcat83 billygoo/tomcat8-jdk8
Warning:
Problem:
docker keeps printing the warning above when starting the container
Cause:
docker was started with --network=host (or -net=host) together with -p port mappings; that combination triggers the warning.
The -p settings then have no effect at all: the host's own port is used, incremented on conflicts.
Starting correctly on the host network
docker run -d --network host --name tomcat83 billygoo/tomcat8-jdk8
Inspect the container
docker inspect tomcat83 | tail -n 20
Access it at
http://<host IP>:8080/
10.4.3 none
In none mode, Docker performs no network configuration for the container at all.
That is, the container has no NIC, IP, or routing information, only an lo interface.
We have to add a NIC and configure an IP for the container ourselves.
Networking is disabled; only lo remains (127.0.0.1, the local loopback)
Start tomcat on the none network
docker run -d -p 8084:8080 --network none --name tomcat84 billygoo/tomcat8-jdk8
Look from inside the container
docker exec -it <container id> bash
Look from outside the container
docker inspect <container id or name> | tail -n 20
It is only reachable from within the container
10.4.4 container
What is it
A new container shares a network/IP configuration with an existing container instead of with the host. The new container does not create its own NIC or configure its own IP; it shares the IP and port range of the specified container. Apart from networking, everything else (filesystem, process list and so on) stays isolated between the two containers.
Hands-on
docker run -d -p 8085:8080 --name tomcat85 billygoo/tomcat8-jdk8
docker run -d -p 8086:8080 --network container:tomcat85 --name tomcat86 billygoo/tomcat8-jdk8
Result
tomcat86 would share the very same IP and port as tomcat85, which causes a port conflict
Alpine
Alpine Linux is a security-oriented, lightweight Linux distribution
Run
docker run -it --name alpine1 alpine /bin/sh
docker run -it --network container:alpine1 --name alpine2 alpine /bin/sh
Verify that the network is shared
ip addr
Now stop alpine1 and look at alpine2 again (its shared eth0 disappears; only lo remains)
10.5 Custom networks
10.5.1 Without a custom network
Create the containers
docker run -d -p 8081:8080 --name tomcat81 billygoo/tomcat8-jdk8
docker run -d -p 8082:8080 --name tomcat82 billygoo/tomcat8-jdk8
After they start, use docker exec to enter each container instance
ping by IP
ping xx.xx.xx.xx
ping by service name
ping tomcat82 (fails on the default bridge)
10.5.2 With a custom network
Create the custom network
docker network create xxyyzz
docker network ls
Create containers joined to the network created above
docker run -d -p 8081:8080 --network xxyyzz --name tomcat81 billygoo/tomcat8-jdk8
docker run -d -p 8082:8080 --network xxyyzz --name tomcat82 billygoo/tomcat8-jdk8
Ping each other
ping by IP
ping xx.xx.xx.xx
ping by service name
ping tomcat82
A custom network maintains the hostname-to-IP mapping for you (both IPs and names resolve)
10.6 Docker platform architecture
Overall:
Architecturally, Docker is a C/S-mode system; the backend is loosely coupled, with many modules each doing its own job.
The basic flow of running Docker:
1 The user talks to the Docker Daemon through the Docker Client, establishing communication and sending requests to it.
2 The Docker Daemon, the main body of the architecture, first provides the Docker Server function so that it can accept requests from the Docker Client.
3 The Docker Engine performs Docker's internal work, each unit of which exists in the form of a Job.
4 While a Job runs, if a container image is needed, it is downloaded from the Docker Registry, and the image management driver, the Graph driver, stores the downloaded image in the form of a Graph.
5 When a network environment has to be created for Docker, the network management driver, the Network driver, creates and configures the container's network environment.
6 When the container's resources must be limited or user commands executed, the Exec driver takes care of it.
7 Libcontainer is a standalone container-management library; both the Network driver and the Exec driver operate on containers through Libcontainer.
Overall architecture:
11. Container Orchestration with Docker-Compose
11.1 What is it
Compose is a tool from Docker, Inc. for managing multiple Docker containers as one application. You define a YAML config file, docker-compose.yml, describing how the containers call one another; a single command then starts or stops all of them together.
Docker-Compose is Docker's official open-source project for quickly orchestrating clusters of Docker containers.
11.2 What can it do
docker recommends running one service per container, and since a container itself consumes very few resources it is best to split every service out on its own. But that raises a problem:
if many services must be deployed at once, must each service get its own Dockerfile, image build and container build? That is exhausting, so docker officially provides docker-compose, a multi-service deployment tool.
For example, a Web microservice project usually needs more than the Web service container itself: a backend mysql database container, a redis server, a eureka registry, maybe even a load-balancer container, and so on.
Compose lets the user define a group of associated application containers as one project through a single docker-compose.yml template file (YAML format).
A multi-container application can easily be defined in one config file, and a single command then installs all of the application's dependencies and completes the build. Docker-Compose solves the problem of managing and orchestrating container-to-container relationships.
11.3 Download and install
Official docs
https://docs.docker.com/compose/compose-file/compose-file-v3/
Official download
https://docs.docker.com/compose/install/
Install steps
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
Uninstall steps
If installed via curl:
sudo rm /usr/local/bin/docker-compose
11.4 Compose core concepts
One file
docker-compose.yml
Two elements
Service
One application container instance, e.g. an order microservice, an inventory microservice, a mysql container, an nginx container or a redis container
Project
A complete business unit made up of a group of associated application containers, defined in the docker-compose.yml file.
11.5 The three steps of using Compose
Write a Dockerfile for each microservice application and build the corresponding image
Use docker-compose.yml to define the complete business unit, arranging all the container services of the overall application.
Finally, run docker-compose up to start and run the whole application: one-command deployment.
11.6 Common Compose commands
docker-compose -h # help
docker-compose up # start all docker-compose services
docker-compose up -d # start all docker-compose services in the background
docker-compose down # stop and remove containers, networks, volumes and images
docker-compose exec <service id in the yml> /bin/bash # enter a container instance
docker-compose ps # list all running containers orchestrated by this compose file
docker-compose top # show the processes of the orchestrated containers
docker-compose logs <service id in the yml> # view a container's log output
docker-compose config # validate the configuration
docker-compose config -q # validate the configuration, printing output only on problems
docker-compose restart # restart services
docker-compose start # start services
docker-compose stop # stop services
11.7 Orchestrating microservices with Compose
Extend the chapter 9 project with redis and mysql
11.7.1 Create the database table
CREATE TABLE `user` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL DEFAULT '' COMMENT '用户名',
  `password` varchar(50) NOT NULL DEFAULT '' COMMENT '密码',
  `sex` tinyint(4) NOT NULL DEFAULT '0' COMMENT '性别 0=女 1=男',
  `deleted` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '删除标志,默认0不删除,1删除',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='用户表';
11.7.2 The project
See the attachment
11.7.3 Write the Dockerfile
# base image: java
FROM openjdk:8
# author
MAINTAINER xxzzyy
# VOLUME declares /tmp as a temp directory; a temp dir is created under /var/lib/docker on the host and linked to /tmp in the container
VOLUME /tmp
# add the jar into the container and rename it app.jar
ADD boot-docker-demo-0.0.1-SNAPSHOT.jar app.jar
# run the jar
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-jar","/app.jar"]
# expose port 6001 for the microservice
EXPOSE 6001
11.7.4 Build the image on the host
docker build -t xxzzyy_docker:1.6 .
11.7.5 Without Compose
A standalone mysql container instance
Create the mysql container
docker run -p 3306:3306 --name mysql57 --privileged=true -v /yzh/mysql/conf:/etc/mysql/conf.d -v /yzh/mysql/logs:/logs -v /yzh/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.7
Enter the mysql container, create database test and table user
docker exec -it mysql57 /bin/bash
mysql -uroot -p
create database test;
use test;
CREATE TABLE `user` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL DEFAULT '' COMMENT '用户名',
  `password` varchar(50) NOT NULL DEFAULT '' COMMENT '密码',
  `sex` tinyint(4) NOT NULL DEFAULT '0' COMMENT '性别 0=女 1=男',
  `deleted` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '删除标志,默认0不删除,1删除',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='用户表';
A standalone redis container instance
docker run -p 6379:6379 --name redis608 --privileged=true -v /app/redis/redis.conf:/etc/redis/redis.conf -v /app/redis/data:/data -d redis:6.0.8 redis-server /etc/redis/redis.conf
The microservice project
docker run -d -p 6001:6001 xxzzyy_docker:1.6
The three container instances start one after another, in order
docker ps
Problems
The startup order is fixed: mysql + redis must come up before the microservice can be reached
Many separate run commands......
When containers stop, start or crash, the container behind an IP address may change and the mapping breaks; either hard-code production IPs (possible but not recommended) or call by service name
11.7.5 With Compose
Write the docker-compose.yml file
version: '3'
services:
  microService:
    image: xxzzyy_docker:1.6
    container_name: ms01
    ports:
      - "6001:6001"
    volumes:
      - /app/microService:/data
    networks:
      - yzh_net
    depends_on:
      - redis
      - mysql
  redis:
    image: redis:6.0.8
    ports:
      - "6379:6379"
    volumes:
      - /app/redis/redis.conf:/etc/redis/redis.conf
      - /app/redis/data:/data
    networks:
      - yzh_net
    command: redis-server /etc/redis/redis.conf
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: '123456'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'no'
      MYSQL_DATABASE: 'test'
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: '123456'
    ports:
      - "3306:3306"
    volumes:
      - /app/mysql/db:/var/lib/mysql
      - /app/mysql/conf/my.cnf:/etc/my.cnf
      - /app/mysql/init:/docker-entrypoint-initdb.d
    networks:
      - yzh_net
    command: --default-authentication-plugin=mysql_native_password
networks:
  yzh_net:
Create the network
docker network create yzh_net
Second revision of the microservice project
#spring.datasource.url=jdbc:mysql://192.168.1.132:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false
spring.datasource.url=jdbc:mysql://mysql:3306/test?useUnicode=true&characterEncoding=utf-8&useSSL=false
#spring.redis.host=192.168.8.201
spring.redis.host=redis
Package with maven, upload to the host, build the image
docker build -t xxzzyy_docker:1.6 .
Run the Compose orchestration
execute docker-compose up, or docker-compose up -d
Enter the mysql container (find its name with docker ps) and create database test + table user
docker exec -it <mysql container name> /bin/bash
mysql -uroot -p
create database test;
use test;
CREATE TABLE `user` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(50) NOT NULL DEFAULT '' COMMENT '用户名',
  `password` varchar(50) NOT NULL DEFAULT '' COMMENT '密码',
  `sex` tinyint(4) NOT NULL DEFAULT '0' COMMENT '性别 0=女 1=男',
  `deleted` tinyint(4) unsigned NOT NULL DEFAULT '0' COMMENT '删除标志,默认0不删除,1删除',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='用户表';
Insert data and test
http://<host ip>:6001/
Shut down
docker-compose stop
12. Portainer, a Lightweight Docker GUI
12.1 What is it
Portainer is a lightweight application that provides a graphical interface for conveniently managing Docker environments, both single hosts and clusters.
12.2 Install
Official site
https://www.portainer.io/
https://docs.portainer.io/v/ce-2.9/start/install/server/docker/linux
Steps
Install with a docker command
docker run -d -p 8000:8000 -p 9000:9000 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
On first login, create the admin account at: xxx.xxx.xxx.xxx:9000
Username: just use the default, admin
Password: at least 8 characters, whatever you like
After setting the admin user and password, log in for the first time
Select the local tab to see the details of the local docker environment