🐳chore(tools): rename directory

develop
xiaojin 5 years ago
parent 1d9799a21e
commit bd8bef8849

@ -1,18 +1,44 @@
Remove all containers and images
docker stop $(docker ps -a -q) && docker system prune --all --force
docker rm $(docker ps -a -q)
# A simple Loki setup with Grafana
Configuration center (Apollo) documentation:
https://www.apolloconfig.com/#/zh/deployment/quick-start
This setup uses Loki for logs, Prometheus for metrics, and Jaeger for tracing. It will automatically collect logs from other Docker containers running on the same server if the installation steps below are followed.
Redis cluster:
https://github.com/Grokzen/docker-redis-cluster
The Livingdocs server can be configured to send metrics data to the stack, and there is also a way to send log data when running the service without the use of Docker (for local development purposes). More details can be found in the [Livingdocs documentation](https://docs.livingdocs.io/).
Mongo cluster:
https://github.com/senssei/mongo-cluster-docker
The stack includes the following:
Local DNS system:
https://github.com/mafintosh/dns-discovery
* **cAdvisor** [[Docs](https://github.com/google/cadvisor)] [[Local UI](http://localhost:9081/)] - Provides resource usage and performance metrics of Docker containers to Prometheus
* **Grafana** [[Docs](https://grafana.com/docs/grafana/latest/)] [[Local UI](http://localhost:3000/)] - UI to explore logs and metrics using queries, charts, and alerts
* **Jaeger** [[Docs](https://www.jaegertracing.io/docs/)] [[Local UI](http://localhost:16686/)] - Provides tracing data which is linked to each incoming Livingdocs server request
* **Loki** [[Docs](https://grafana.com/docs/loki/latest/)] - Ingests logs which can be viewed and queried from within Grafana
* **OpenTelemetry Collector** [[Docs](https://opentelemetry.io/docs/collector/)] - Collects metrics data from Livingdocs server and exports the data to Prometheus
* **Prometheus** [[Docs](https://prometheus.io/docs/)] [[Local UI](http://localhost:3001/)] - A monitoring toolkit for timeseries based metrics
* **Vector** [[Docs](https://vector.dev/docs/)] - Transforms Docker logs and sends them to Loki, and can also collect logs from local Node.js processes if required
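Once the stack is running, container logs can be queried from Grafana's Explore view using LogQL. A minimal sketch (the `container_name` label is an assumption; the actual label names depend on how Vector is configured to label the log streams):

```
{container_name="environment-grafana"} |= "error"
```

This selects the log stream for one container and filters it to lines containing "error".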
CI pipeline practice based on Gitea + Drone + Docker:
https://zhuanlan.zhihu.com/p/266072740
## Installation
- Clone the repository and launch cAdvisor, Grafana, Jaeger, Loki, OpenTelemetry Collector, Prometheus, and Vector:
```sh
git clone git@github.com:livingdocsIO/monitoring.git
cd monitoring
docker-compose up -d
```
- Navigate to <http://localhost:3000/> and log in using the default Grafana user (admin/admin).
## Troubleshooting
### Directory permissions
If you see errors about being unable to create files or directories when starting the containers, the issue is probably related to user permissions. We run the containers which write to disk with uid 1000 and gid 1000, which we see as a sensible default for most users. If that causes problems, you can either change the `user: "1000:1000"` lines in docker-compose.yaml to values which suit your environment, or remove those lines and set the containers' default ownership on the directories:
```sh
chown -R 472:472 ./data/grafana
chown -R 0:0 ./data/jaeger
chown -R 10001:10001 ./data/loki
chown -R 65534:65534 ./data/prometheus
```
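Alternatively, if you want the data directories owned by your own user, the `user:` values can be overridden per service, e.g. in a `docker-compose.override.yaml`. This is a sketch: `${UID}` and `${GID}` must be exported in your shell first, since Compose does not set them automatically:

```yaml
services:
  grafana:
    user: "${UID}:${GID}"
  prometheus:
    user: "${UID}:${GID}"
```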
### Jaeger EMSGSIZE error
Jaeger needs larger UDP packet sizes than the maximum configured on macOS. Increase the default using:
```sh
sudo sysctl net.inet.udp.maxdgram=65536
```
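Note that `sysctl` changes are lost on reboot. To persist the value across reboots (assuming your macOS version still reads `/etc/sysctl.conf` at boot, which is worth verifying):

```sh
echo "net.inet.udp.maxdgram=65536" | sudo tee -a /etc/sysctl.conf
```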

@ -1,12 +1,12 @@
version: "3"
networks:
environment:
services:
cadvisor:
image: google/cadvisor:v0.33.0
container_name: environment-cadvisor
restart: unless-stopped
command: --docker_only=true --store_container_labels=false
ports: ["4100:8080"]
@ -16,29 +16,29 @@ services:
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
- /dev/disk/:/dev/disk:ro
networks: [environment]
privileged: true
grafana:
image: grafana/grafana:7.3.6
container_name: environment-grafana
restart: unless-stopped
ports: ["4101:3000"]
volumes:
- ./datasource.yaml:/etc/grafana/provisioning/datasources/monitoring.yaml
- ./dashboards:/etc/grafana/provisioning/dashboards
- ./data/grafana:/var/lib/grafana
networks: [environment]
user: "1000:1000"
jaeger:
image: jaegertracing/all-in-one:1.21.0
container_name: environment-jaeger
restart: unless-stopped
ports: ["6831:6831/udp", "6832:6832/udp", "14250:14250", "14269:14269", "4102:16686"]
volumes:
- ./data/jaeger:/badger
networks: [environment]
mem_limit: 512m
environment:
- SPAN_STORAGE_TYPE=badger
@ -50,71 +50,71 @@ services:
prometheus:
image: prom/prometheus:v2.24.1
container_name: environment-prometheus
restart: unless-stopped
command: --config.file=/etc/prometheus/prometheus.yaml --web.route-prefix=/ --storage.tsdb.path=/prometheus --storage.tsdb.retention.time=30d --web.enable-lifecycle --web.enable-admin-api
ports: ["4103:9090"]
volumes:
- ./prometheus.yaml:/etc/prometheus/prometheus.yaml
- ./data/prometheus:/prometheus
networks: [environment]
user: "1000:1000"
mem_limit: 512m
loki:
image: grafana/loki:2.1.0
container_name: environment-loki
restart: unless-stopped
ports: ["3100:3100"]
volumes:
- ./loki.yaml:/etc/loki/local-config.yaml
- /var/log:/var/log:ro
- ./data/loki:/loki
networks: [environment]
user: "1000:1000"
opentelemetry-collector:
image: otel/opentelemetry-collector:0.18.0
container_name: environment-opentelemetry-collector
restart: unless-stopped
command: --config=/conf/otel-collector.config.yaml
ports: ["9464:9464", "55680:55680", "55681:55681"]
volumes:
- ./otel-collector.yaml:/conf/otel-collector.config.yaml
networks: [environment]
vector:
image: timberio/vector:0.11.1-alpine
container_name: environment-vector
restart: unless-stopped
ports: ["8383:8383", "8686:8686", "9160:9160", "4545:4545/udp"]
volumes:
- ./vector.toml:/etc/vector/vector.toml
- /var/run/docker.sock:/var/run/docker.sock:ro
networks: [environment]
depends_on: [loki, prometheus]
mem_limit: 100m
apollo:
image: nobodyiam/apollo-quick-start
container_name: environment-apollo
depends_on:
- apollo-db
ports:
- "4104:8080"
- "4105:8090"
- "4106:8070"
networks: [environment]
links:
- apollo-db
apollo-db:
image: mysql:5.7
container_name: environment-apollo-db
environment:
TZ: Asia/Shanghai
MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
networks: [environment]
depends_on:
- apollo-dbdata
ports:
@ -126,7 +126,31 @@ services:
apollo-dbdata:
image: alpine:latest
container_name: environment-apollo-dbdata
networks: [environment]
volumes:
- /var/lib/mysql
mongodb:
image: docker.io/bitnami/mongodb:4.4
container_name: environment-mongodb
environment:
# - MONGODB_ENABLE_IPV6=yes
- ALLOW_EMPTY_PASSWORD=yes
ports:
- "4108:27017"
networks: [environment]
volumes:
- './data/mongodb:/bitnami/mongodb'
redis:
image: docker.io/bitnami/redis:6.2
container_name: environment-redis
environment:
# ALLOW_EMPTY_PASSWORD is recommended only for development.
# - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
- ALLOW_EMPTY_PASSWORD=yes
ports:
- '4109:6379'
volumes:
- './data/redis:/bitnami/redis/data'

@ -1,87 +0,0 @@
# mongo-cluster-docker
This is a simple 3-node replica MongoDB setup based on the official `mongo` Docker image using `docker-compose`, described in my blog post at https://warzycha.pl/mongo-db-sharding-docker-example/.
For a detailed description, steps, and discussion see:
1. https://warzycha.pl/mongo-db-sharding-docker-example/
2. https://warzycha.pl/mongo-db-shards-by-location/
# Run
```
docker-compose -f docker-compose.1.yml -f docker-compose.2.yml -f docker-compose.cnf.yml -f docker-compose.shard.yml up
```
# Tests
> Manually for the time being
0. Core tests
Basic *replica* test on *rs1* replica set (data nodes), `mongo-1-1`
```js
rs.status();
```
this should list 3 nodes under `members`.
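The same check can be scripted from the host, using the mapped port for `mongo-1-1` taken from `docker-compose.1.yml` (this assumes the `mongo` shell is installed locally and the cluster is up):

```shell
mongo --host localhost:30011 --quiet --eval 'rs.status().members.length'
```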
Basic *sharding* test on *router* (mongos), `mongo-router`
```js
sh.status();
```
this should return something similar to:
```
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("587d306454828b89adaca524")
}
shards:
active mongoses:
"3.4.1" : 1
balancer:
Currently enabled: yes
Currently running: yes
Balancer lock taken at Mon Jan 16 2017 22:18:53 GMT+0100 by ConfigServer:Balancer
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
```
# Sharding configuration
Connect to the 'mongos' router and run `queries/shard-status.js` to check shard status.
To set up location-based partitioning, run `queries/init.js`.
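Assuming the router's mapped port from `docker-compose.shard.yml` (30001) and a local `mongo` shell, the scripts can be run from the host like this (a sketch, not tested against every setup):

```shell
mongo localhost:30001/test queries/shard-status.js
mongo localhost:30001/test queries/init.js
```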
# Issues and limitations
Startup sometimes gets stuck on 'mongo-router | 2017-01-16T21:29:48.573+0000 W NETWORK [replSetDistLockPinger] No primary detected for set cnf-serv'. This is caused by the fairly random startup order in `docker-compose`. The workaround is to remove all related containers:
```
docker-compose -f docker-compose.1.yml -f docker-compose.2.yml -f docker-compose.cnf.yml -f docker-compose.shard.yml rm -f
```
Pull requests are welcome. :)
The `mongosetup` service is now split into multiple `yml` files. :)
# Reference
* http://www.sohamkamani.com/blog/2016/06/30/docker-mongo-replica-set/
* https://github.com/singram/mongo-docker-compose
* http://stackoverflow.com/questions/31138631/configuring-mongodb-replica-set-from-docker-compose
* https://gist.github.com/garycrawford/0a45f820e146917d231d
* http://stackoverflow.com/questions/31746182/docker-compose-wait-for-container-x-before-starting-y
* https://docs.docker.com/compose/startup-order/
* https://github.com/soldotno/elastic-mongo/blob/master/docker-compose.yml

@ -1,50 +0,0 @@
version: '3'
services:
mongo-1-2:
container_name: "mongo-1-2"
image: bitnami/mongodb:latest
ports:
- "30012:27017"
command: mongod --replSet rs1 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-1-3:
container_name: "mongo-1-3"
image: bitnami/mongodb:latest
ports:
- "30013:27017"
command: mongod --replSet rs1 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-1-1:
container_name: "mongo-1-1"
image: bitnami/mongodb:latest
ports:
- "30011:27017"
command: mongod --replSet rs1 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
links:
- mongo-1-2:mongo-1-2
- mongo-1-3:mongo-1-3
restart: always
mongo-rs1-setup:
container_name: "mongo-rs1-setup"
image: bitnami/mongodb:latest
depends_on:
- "mongo-1-1"
- "mongo-1-2"
- "mongo-1-3"
links:
- mongo-1-1:mongo-1-1
- mongo-1-2:mongo-1-2
- mongo-1-3:mongo-1-3
volumes:
- ./scripts:/scripts
environment:
- MONGO1=mongo-1-1
- MONGO2=mongo-1-2
- MONGO3=mongo-1-3
- RS=rs1
entrypoint: [ "/scripts/setup.sh" ]

@ -1,50 +0,0 @@
version: '3'
services:
mongo-2-2:
container_name: "mongo-2-2"
image: bitnami/mongodb:latest
ports:
- "30022:27017"
command: mongod --replSet rs2 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-2-3:
container_name: "mongo-2-3"
image: bitnami/mongodb:latest
ports:
- "30023:27017"
command: mongod --replSet rs2 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-2-1:
container_name: "mongo-2-1"
image: bitnami/mongodb:latest
ports:
- "30021:27017"
command: mongod --replSet rs2 --shardsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
links:
- mongo-2-2:mongo-2-2
- mongo-2-3:mongo-2-3
restart: always
mongo-rs2-setup:
container_name: "mongo-rs2-setup"
image: bitnami/mongodb:latest
depends_on:
- "mongo-2-1"
- "mongo-2-2"
- "mongo-2-3"
links:
- mongo-2-1:mongo-2-1
- mongo-2-2:mongo-2-2
- mongo-2-3:mongo-2-3
volumes:
- ./scripts:/scripts
environment:
- MONGO1=mongo-2-1
- MONGO2=mongo-2-2
- MONGO3=mongo-2-3
- RS=rs2
entrypoint: [ "/scripts/setup.sh" ]

@ -1,49 +0,0 @@
version: '3'
services:
mongo-cnf-2:
container_name: "mongo-cnf-2"
image: bitnami/mongodb:latest
ports:
- "30102:27017"
command: mongod --replSet cnf-serv --configsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-cnf-3:
container_name: "mongo-cnf-3"
image: bitnami/mongodb:latest
ports:
- "30103:27017"
command: mongod --replSet cnf-serv --configsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
restart: always
mongo-cnf-1:
container_name: "mongo-cnf-1"
image: bitnami/mongodb:latest
ports:
- "30101:27017"
command: mongod --replSet cnf-serv --configsvr --port 27017 --oplogSize 16 --noprealloc --smallfiles
links:
- mongo-cnf-2:mongo-cnf-2
- mongo-cnf-3:mongo-cnf-3
restart: always
mongo-cnf-setup:
container_name: "mongo-cnf-setup"
image: bitnami/mongodb:latest
depends_on:
- "mongo-cnf-1"
- "mongo-cnf-2"
- "mongo-cnf-3"
links:
- mongo-cnf-1:mongo-cnf-1
- mongo-cnf-2:mongo-cnf-2
- mongo-cnf-3:mongo-cnf-3
volumes:
- ./scripts:/scripts
environment:
- MONGO1=mongo-cnf-1
- MONGO2=mongo-cnf-2
- MONGO3=mongo-cnf-3
- RS=cnf-serv
- PORT=27017
entrypoint: [ "/scripts/setup-cnf.sh" ]

@ -1,49 +0,0 @@
version: '3'
services:
mongo-router:
container_name: "mongo-router"
image: bitnami/mongodb:latest
ports:
- "30001:27017"
depends_on:
- "mongo-rs1-setup"
- "mongo-rs2-setup"
- "mongo-cnf-setup"
external_links:
- mongo-cnf-1:mongo-cnf-1
- mongo-cnf-2:mongo-cnf-2
- mongo-cnf-3:mongo-cnf-3
- mongo-1-1:mongo-1-1
- mongo-1-2:mongo-1-2
- mongo-1-3:mongo-1-3
- mongo-2-1:mongo-2-1
- mongo-2-2:mongo-2-2
- mongo-2-3:mongo-2-3
command: mongos --configdb cnf-serv/mongo-cnf-1:27017,mongo-cnf-2:27017,mongo-cnf-3:27017 --port 27017 --bind_ip 0.0.0.0
restart: always
mongo-shard-setup:
container_name: "mongo-shard-setup"
image: bitnami/mongodb:latest
depends_on:
- "mongo-router"
links:
- mongo-router:mongo-router
volumes:
- ./scripts:/scripts
environment:
- MONGOS=mongo-router
- MONGO11=mongo-1-1
- MONGO12=mongo-1-2
- MONGO13=mongo-1-3
- MONGO21=mongo-2-1
- MONGO22=mongo-2-2
- MONGO23=mongo-2-3
- RS1=rs1
- RS2=rs2
- PORT=27017
- PORT1=27017
- PORT2=27017
- PORT3=27017
entrypoint: [ "/scripts/init-shard.sh" ]
restart: on-failure:20

@ -1,54 +0,0 @@
sh.removeShardTag("rs1", "US");
sh.removeShardTag("rs2", "EU");
sh.addShardTag("rs1", "US");
sh.addShardTag("rs2", "EU");
sh.disableBalancing("test.sample");
db.sample.drop();
db.createCollection("sample");
db.sample.createIndex( { factoryId: 1 } );
sh.enableSharding("test");
sh.shardCollection("test.sample",{ location: 1, factoryId: 1});
sh.addTagRange(
"test.sample",
{ "location" : "US", "factoryId" : MinKey },
{ "location" : "US", "factoryId" : MaxKey },
"US"
);
sh.addTagRange(
"test.sample",
{ "location" : "EU", "factoryId" : MinKey },
{ "location" : "EU", "factoryId" : MaxKey },
"EU"
);
sh.enableBalancing("test.sample");
for(var i=0; i<100; i++){
db.sample.insert({
"location": "US",
"factoryId": NumberInt(i)
});
db.sample.insert({
"location": "EU",
"factoryId": NumberInt(100+i)
});
}
sh.startBalancer();
db.sample.find();

@ -1 +0,0 @@
* text=auto eol=lf

@ -1,32 +0,0 @@
#!/bin/bash
mongodb1=`getent hosts ${MONGOS} | awk '{ print $1 }'`
mongodb11=`getent hosts ${MONGO11} | awk '{ print $1 }'`
mongodb12=`getent hosts ${MONGO12} | awk '{ print $1 }'`
mongodb13=`getent hosts ${MONGO13} | awk '{ print $1 }'`
mongodb21=`getent hosts ${MONGO21} | awk '{ print $1 }'`
mongodb22=`getent hosts ${MONGO22} | awk '{ print $1 }'`
mongodb23=`getent hosts ${MONGO23} | awk '{ print $1 }'`
mongodb31=`getent hosts ${MONGO31} | awk '{ print $1 }'`
mongodb32=`getent hosts ${MONGO32} | awk '{ print $1 }'`
mongodb33=`getent hosts ${MONGO33} | awk '{ print $1 }'`
port=${PORT:-27017}
echo "Waiting for startup.."
until mongo --host ${mongodb1}:${port} --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
printf '.'
sleep 1
done
echo "Started.."
echo init-shard.sh time now: `date +"%T" `
mongo --host ${mongodb1}:${port} <<EOF
sh.addShard( "${RS1}/${mongodb11}:${PORT1},${mongodb12}:${PORT2},${mongodb13}:${PORT3}" );
sh.addShard( "${RS2}/${mongodb21}:${PORT1},${mongodb22}:${PORT2},${mongodb23}:${PORT3}" );
sh.status();
EOF

@ -1,40 +0,0 @@
#!/bin/bash
mongodb1=`getent hosts ${MONGO1} | awk '{ print $1 }'`
mongodb2=`getent hosts ${MONGO2} | awk '{ print $1 }'`
mongodb3=`getent hosts ${MONGO3} | awk '{ print $1 }'`
port=${PORT:-27017}
echo "Waiting for startup.."
until mongo --host ${mongodb1}:${port} --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
printf '.'
sleep 1
done
echo "Started.."
echo setup-cnf.sh time now: `date +"%T" `
mongo --host ${mongodb1}:${port} <<EOF
var cfg = {
"_id": "${RS}",
"configsvr": true,
"protocolVersion": 1,
"members": [
{
"_id": 100,
"host": "${mongodb1}:${port}"
},
{
"_id": 101,
"host": "${mongodb2}:${port}"
},
{
"_id": 102,
"host": "${mongodb3}:${port}"
}
]
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
EOF

@ -1,39 +0,0 @@
#!/bin/bash
mongodb1=`getent hosts ${MONGO1} | awk '{ print $1 }'`
mongodb2=`getent hosts ${MONGO2} | awk '{ print $1 }'`
mongodb3=`getent hosts ${MONGO3} | awk '{ print $1 }'`
port=${PORT:-27017}
echo "Waiting for startup.."
until mongo --host ${mongodb1}:${port} --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
printf '.'
sleep 1
done
echo "Started.."
echo setup.sh time now: `date +"%T" `
mongo --host ${mongodb1}:${port} <<EOF
var cfg = {
"_id": "${RS}",
"protocolVersion": 1,
"members": [
{
"_id": 0,
"host": "${mongodb1}:${port}"
},
{
"_id": 1,
"host": "${mongodb2}:${port}"
},
{
"_id": 2,
"host": "${mongodb3}:${port}"
}
]
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
EOF

@ -1,44 +0,0 @@
# A simple Loki setup with Grafana
This setup uses Loki for logs, Prometheus for metrics, and Jaeger for tracing. It will automatically collect logs from other Docker containers running on the same server if the installation steps below are followed.
The Livingdocs server can be configured to send metrics data to the stack, and there is also a way to send log data when running the service without the use of Docker (for local development purposes). More details can be found in the [Livingdocs documentation](https://docs.livingdocs.io/).
The stack includes the following:
* **cAdvisor** [[Docs](https://github.com/google/cadvisor)] [[Local UI](http://localhost:9081/)] - Provides resource usage and performance metrics of Docker containers to Prometheus
* **Grafana** [[Docs](https://grafana.com/docs/grafana/latest/)] [[Local UI](http://localhost:3000/)] - UI to explore logs and metrics using queries, charts, and alerts
* **Jaeger** [[Docs](https://www.jaegertracing.io/docs/)] [[Local UI](http://localhost:16686/)] - Provides tracing data which is linked to each incoming Livingdocs server request
* **Loki** [[Docs](https://grafana.com/docs/loki/latest/)] - Ingests logs which can be viewed and queried from within Grafana
* **OpenTelemetry Collector** [[Docs](https://opentelemetry.io/docs/collector/)] - Collects metrics data from Livingdocs server and exports the data to Prometheus
* **Prometheus** [[Docs](https://prometheus.io/docs/)] [[Local UI](http://localhost:3001/)] - A monitoring toolkit for timeseries based metrics
* **Vector** [[Docs](https://vector.dev/docs/)] - Transforms Docker logs and sends them to Loki, and can also collect logs from local Node.js processes if required
## Installation
- Clone the repository and launch cAdvisor, Grafana, Jaeger, Loki, OpenTelemetry Collector, Prometheus, and Vector:
```sh
git clone git@github.com:livingdocsIO/monitoring.git
cd monitoring
docker-compose up -d
```
- Navigate to <http://localhost:3000/> and log in using the default Grafana user (admin/admin).
## Troubleshooting
### Directory permissions
If you see errors about being unable to create files or directories when starting the containers, the issue is probably related to user permissions. We run the containers which write to disk with uid 1000 and gid 1000, which we see as a sensible default for most users. If that causes problems, you can either change the `user: "1000:1000"` lines in docker-compose.yaml to values which suit your environment, or remove those lines and set the containers' default ownership on the directories:
```sh
chown -R 472:472 ./data/grafana
chown -R 0:0 ./data/jaeger
chown -R 10001:10001 ./data/loki
chown -R 65534:65534 ./data/prometheus
```
### Jaeger EMSGSIZE error
Jaeger needs larger UDP packet sizes than the maximum configured on macOS. Increase the default using:
```sh
sudo sysctl net.inet.udp.maxdgram=65536
```

@ -1,52 +0,0 @@
# Build based on redis:6.0 from 2020-05-05
FROM redis@sha256:f7ee67d8d9050357a6ea362e2a7e8b65a6823d9b612bc430d057416788ef6df9
LABEL maintainer="Johan Andersson <Grokzen@gmail.com>"
# Some Environment Variables
ENV HOME /root
ENV DEBIAN_FRONTEND noninteractive
# Install system dependencies
RUN apt-get update -qq && \
apt-get install --no-install-recommends -yqq \
net-tools supervisor ruby rubygems locales gettext-base wget gcc make g++ build-essential libc6-dev tcl && \
apt-get clean -yqq
# Ensure UTF-8 lang and locale
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
# Necessary for gem installs due to SHA1 being weak and old cert being revoked
ENV SSL_CERT_FILE=/usr/local/etc/openssl/cert.pem
RUN gem install redis -v 4.1.3
# This will always build the latest release/commit in the 6.2 branch
ARG redis_version=6.2
RUN wget -qO redis.tar.gz https://github.com/redis/redis/tarball/${redis_version} \
&& tar xfz redis.tar.gz -C / \
&& mv /redis-* /redis
RUN (cd /redis && make)
RUN mkdir /redis-conf && mkdir /redis-data
COPY redis-cluster.tmpl /redis-conf/redis-cluster.tmpl
COPY redis.tmpl /redis-conf/redis.tmpl
COPY sentinel.tmpl /redis-conf/sentinel.tmpl
# Add startup script
COPY docker-entrypoint.sh /docker-entrypoint.sh
# Add script that generates supervisor conf file based on environment variables
COPY generate-supervisor-conf.sh /generate-supervisor-conf.sh
RUN chmod 755 /docker-entrypoint.sh
EXPOSE 7000 7001 7002 7003 7004 7005 7006 7007 5000 5001 5002
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["redis-cluster"]

@ -1,26 +0,0 @@
help:
@echo "Please use 'make <target>' where <target> is one of"
@echo " build builds docker-compose containers"
@echo " up starts docker-compose containers"
@echo " down stops the running docker-compose containers"
@echo " rebuild rebuilds the image from scratch without using any cached layers"
@echo " bash starts bash inside a running container."
@echo " cli run redis-cli inside the container on the server with port 7000"
build:
docker-compose build
up:
docker-compose up -d
down:
docker-compose stop
rebuild:
docker-compose build --no-cache
bash:
docker-compose exec redis-cluster /bin/bash
cli:
docker-compose exec redis-cluster /redis/src/redis-cli -p 7000

@ -1,286 +0,0 @@
# docker-redis-cluster
[![Docker Stars](https://img.shields.io/docker/stars/grokzen/redis-cluster.svg)](https://hub.docker.com/r/grokzen/redis-cluster/)
[![Docker Pulls](https://img.shields.io/docker/pulls/grokzen/redis-cluster.svg)](https://hub.docker.com/r/grokzen/redis-cluster/)
[![Build Status](https://travis-ci.org/Grokzen/docker-redis-cluster.svg?branch=master)](https://travis-ci.org/Grokzen/docker-redis-cluster)
Docker image with redis built and installed from source and a cluster is built.
To find all redis-server releases see them here https://github.com/antirez/redis/releases
## Discussions, help, guides
GitHub has recently released its `Discussions` feature in beta for more repositories across the GitHub space, and it has been enabled on this repo for a while.
Because we now have this feature, the issue tracker is NOT the place to ask general questions or request simple help with this repo.
What can you expect to find in there?
- A place where you can freely ask any question regarding this repo.
- Questions like `how do I do X?`
- General help with problems with this repo.
- Guides written by me or other contributors, with useful examples and answers to commonly asked questions and how to resolve those problems.
- Answers marked as approved and promoted by me when the community helps resolve a question.
## What this repo and container IS
This repo exists to make it quick and simple to get a redis cluster up and running with no fuss and minimal effort. The primary use for this container is to spin up a cluster in no time for demo/presentation/development. It is not intended or built for anything else.
I also aim to have every single release of redis that supports a cluster available for use so you can run the exact version you want.
I personally use this to develop redis cluster client code https://github.com/Grokzen/redis-py-cluster
## What this repo and container IS NOT
This container is not intended as a production container, or for use in any environment other than running locally on your machine. It is not meant to run on Kubernetes or in any other prod/stage/test/dev environment as a fully working component. If that works for you and your use case, then awesome, but this container will not change to fit any primary use other than running locally on your machine.
If you are looking for a production-quality or Kubernetes-compatible solution, you are looking in the wrong repo; there are other projects and forks of this repo suited to that situation.
For all other purposes, you are free to fork and/or rebuild this container, using it as a template for what you need.
## Redis major version support and docker.hub availability
Starting from `2020-04-01` this repo only supports, and makes available on Docker Hub, the minor versions of the latest 3 major versions of redis-server. At that date the Docker Hub tags for major versions 3.0, 3.2 & 4.0 were removed, and only 5.0, 6.0 & 6.2 remain available for download. This does not mean you cannot build your desired version from this repo, but there are no guarantees, support, or hacks for it out of the box.
Moving forward, when a new major release ships, all tags from the last supported major version will be removed from Docker Hub at the first minor release (X.Y.1) of the next major version. This gives the community some time to adapt before the older major version disappears from Docker Hub.
This major-version support schema follows the one redis itself uses.
## Redis instances inside the container
The cluster consists of 6 redis instances: 3 masters & 3 slaves, one slave per master, running on ports 7000 to 7005.
If the flag `-e "SENTINEL=true"` is passed, 3 Sentinel nodes also run on ports 5000 to 5002, matching the cluster's master instances.
This image requires at least `Docker` version 1.10 but the latest version is recommended.
# Important for Mac users
If you are using this container to run a redis cluster on your Mac, you need to configure the container to use a different IP address for cluster discovery, as it can't use the default discovery IP hardcoded into the container.
If you are using the docker-compose file to build the container, you must export an environment variable on your machine before building the container.
```
# This will make redis do cluster discovery and bind all nodes to 0.0.0.0 internally
export REDIS_CLUSTER_IP=0.0.0.0
```
If you are downloading the container from dockerhub, you must add the internal IP environment variable to your `docker run` command.
```
docker run -e "IP=0.0.0.0" -p 7000-7005:7000-7005 grokzen/redis-cluster:latest
```
# Usage
This git repo uses `pyinvoke` to pull, build, and push docker images. You can use it to build your own images if you like.
The invoke scripts in this repo are written for Python 3.7 and above.
Install `pyinvoke` with `pip install invoke`.
The script runs `number of CPUs - 1` parallel tasks based on your version input.
To see available commands run `invoke -l` in the root folder of this repo. Example
```
(tmp-615229a94c330b9) ➜ docker-redis-cluster git:(pyinvoke) ✗ invoke -l
Configured multiprocess pool size: 3
Available tasks:
build
pull
push
```
Each command is only taking one required positional argument `version`. Example:
```
(tmp-615229a94c330b9) ➜ docker-redis-cluster git:(pyinvoke) ✗ invoke build 6.0
...
```
and it will run the build step on all versions that start with 6.0.
The only other useful optional argument is `--cpu=N`, which sets how many parallel processes are used. By default `number of CPUs - 1` cores are used. Commands like pull and push are not very CPU intensive, so a higher number here may speed things up if you have good network bandwidth.
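The default parallelism described above amounts to something like this (a hypothetical sketch, not the repo's actual code):

```python
import multiprocessing

def default_pool_size(cpu=None):
    """Use N-1 of the available CPU cores, but never fewer than 1."""
    available = cpu if cpu is not None else multiprocessing.cpu_count()
    return max(1, available - 1)

print(default_pool_size(4))  # 3
```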
## Makefile (legacy)
Makefile still has a few docker-compose commands that can be used
To build your own image run:
make build
To run the container run:
make up
To stop the container run:
make down
To connect to your cluster you can use the redis-cli tool:
redis-cli -c -p 7000
Or use the redis-cli tool built into the container, which will connect to the cluster inside the container:
make cli
## Include sentinel instances
Sentinel instances are not enabled by default.
If running with plain docker, pass `-e SENTINEL=true`.
When running with docker-compose, set the environment variable `REDIS_USE_SENTINEL=true` on your system and start your container.
version: '2'
services:
redis-cluster:
...
environment:
SENTINEL: 'true'
## Change number of nodes
By default, it launches 3 masters with 1 slave per master. This is configurable through a number of environment variables:
| Environment variable | Default |
| -------------------- |--------:|
| `INITIAL_PORT` | 7000 |
| `MASTERS` | 3 |
| `SLAVES_PER_MASTER` | 1 |
Therefore, the total number of nodes (`NODES`) is going to be `$MASTERS * ( $SLAVES_PER_MASTER + 1 )` and ports are going to range from `$INITIAL_PORT` to `$INITIAL_PORT + NODES - 1`.
In the docker-compose file provided by this repository, ports 7000-7050 are already mapped to the host's. If you need more than 50 nodes in total, or need to change the initial port number, you should override those values.
Also note that the number of sentinels (if enabled) equals the number of masters. The docker-compose file maps ports 5000-5010 by default; override those values too if you have more than 10 masters.
version: '2'
services:
redis-cluster:
...
environment:
INITIAL_PORT: 9000
MASTERS: 2
SLAVES_PER_MASTER: 2
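The node and port arithmetic above can be sketched as follows (a hypothetical helper for illustration, not part of this repo):

```python
def cluster_layout(initial_port=7000, masters=3, slaves_per_master=1):
    """Total node count and the inclusive port range the cluster will occupy."""
    nodes = masters * (slaves_per_master + 1)
    return nodes, (initial_port, initial_port + nodes - 1)

print(cluster_layout())            # (6, (7000, 7005))
print(cluster_layout(9000, 2, 2))  # (6, (9000, 9005))
```

With the defaults this gives 6 nodes on ports 7000-7005, matching the cluster description earlier in this README.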
## IPv6 support
By default, redis instances bind to and accept requests from any IPv4 address.
This is configurable via an environment variable that specifies which address a redis instance binds to.
Using the IPv6 wildcard `::` as the counterpart to IPv4's `0.0.0.0`, an IPv6 cluster can be created.
| Environment variable | Default |
| -------------------- | ------: |
| `BIND_ADDRESS` | 0.0.0.0 |
Note that Docker also needs to be [configured](https://docs.docker.com/config/daemon/ipv6/) for IPv6 support.
Unfortunately Docker does not handle IPv6 NAT so, when acceptable, `--network host` can be used.
# Example using plain docker
docker run -e "IP=::1" -e "BIND_ADDRESS=::" --network host grokzen/redis-cluster:latest
## Build alternative redis versions
For a release to be buildable it needs to be present at this url: http://download.redis.io/releases/
### docker build
To build a different redis version, use the `--build-arg` argument.

```sh
# Example plain docker
docker build --build-arg redis_version=6.0.11 -t grokzen/redis-cluster .
```
### docker-compose
To build a different redis version use the `--build-arg` argument.
```sh
# Example docker-compose
docker-compose build --build-arg "redis_version=6.0.11" redis-cluster
```
# Available tags
The following tags with pre-built images are available on Docker Hub.
The latest release in the most recent stable branch is used as the `latest` version.
- latest == 6.2.1
Redis 6.2.x versions:
- 6.2.1
- 6.2.0
- 6.2-rc2
- 6.2-rc1
Redis 6.0.x versions:
- 6.0.12
- 6.0.11
- 6.0.10
- 6.0.9
- 6.0.8
- 6.0.7
- 6.0.6
- 6.0.5
- 6.0.4
- 6.0.3
- 6.0.2
- 6.0.1
- 6.0.0
Redis 5.0.x version:
- 5.0.12
- 5.0.11
- 5.0.10
- 5.0.9
- 5.0.8
- 5.0.7
- 5.0.6
- 5.0.5
- 5.0.4
- 5.0.3
- 5.0.2
- 5.0.1
- 5.0.0
## Unavailable major versions
The following major versions are no longer available for download from Docker Hub. You can still build and run them directly from this repo.
- 4.0
- 3.2
- 3.0
# License
This repo uses the MIT license. You can find it in the file [LICENSE](LICENSE).
@ -1,15 +0,0 @@
```yaml
version: '2'
services:
  redis-cluster:
    environment:
      IP: ${REDIS_CLUSTER_IP}
      SENTINEL: ${REDIS_USE_SENTINEL}
      STANDALONE: ${REDIS_USE_STANDALONE}
    build:
      context: .
      args:
        redis_version: '6.2.1'
    hostname: server
    ports:
      - '7000-7050:7000-7050'
      - '5000-5010:5000-5010'
```
@ -1,103 +0,0 @@
```sh
#!/bin/sh

if [ "$1" = 'redis-cluster' ]; then
    # Allow passing in cluster IP by argument or environmental variable
    IP="${2:-$IP}"

    if [ -z "$IP" ]; then # If IP is unset then discover it
        IP=$(hostname -I)
    fi

    echo " -- IP Before trim: '$IP'"
    IP=$(echo ${IP}) # trim whitespaces
    echo " -- IP Before split: '$IP'"
    IP=${IP%% *} # use the first ip
    echo " -- IP After trim: '$IP'"

    if [ -z "$INITIAL_PORT" ]; then # Default to port 7000
        INITIAL_PORT=7000
    fi
    if [ -z "$MASTERS" ]; then # Default to 3 masters
        MASTERS=3
    fi
    if [ -z "$SLAVES_PER_MASTER" ]; then # Default to 1 slave for each master
        SLAVES_PER_MASTER=1
    fi
    if [ -z "$BIND_ADDRESS" ]; then # Default to any IPv4 address
        BIND_ADDRESS=0.0.0.0
    fi

    max_port=$(($INITIAL_PORT + $MASTERS * ( $SLAVES_PER_MASTER + 1 ) - 1))
    first_standalone=$(($max_port + 1))

    if [ "$STANDALONE" = "true" ]; then
        STANDALONE=2
    fi
    if [ ! -z "$STANDALONE" ]; then
        max_port=$(($max_port + $STANDALONE))
    fi

    for port in $(seq $INITIAL_PORT $max_port); do
        mkdir -p /redis-conf/${port}
        mkdir -p /redis-data/${port}

        if [ -e /redis-data/${port}/nodes.conf ]; then
            rm /redis-data/${port}/nodes.conf
        fi
        if [ -e /redis-data/${port}/dump.rdb ]; then
            rm /redis-data/${port}/dump.rdb
        fi
        if [ -e /redis-data/${port}/appendonly.aof ]; then
            rm /redis-data/${port}/appendonly.aof
        fi

        if [ "$port" -lt "$first_standalone" ]; then
            PORT=${port} BIND_ADDRESS=${BIND_ADDRESS} envsubst < /redis-conf/redis-cluster.tmpl > /redis-conf/${port}/redis.conf
            nodes="$nodes $IP:$port"
        else
            PORT=${port} BIND_ADDRESS=${BIND_ADDRESS} envsubst < /redis-conf/redis.tmpl > /redis-conf/${port}/redis.conf
        fi

        if [ "$port" -lt $(($INITIAL_PORT + $MASTERS)) ]; then
            if [ "$SENTINEL" = "true" ]; then
                PORT=${port} SENTINEL_PORT=$((port - 2000)) envsubst < /redis-conf/sentinel.tmpl > /redis-conf/sentinel-${port}.conf
                cat /redis-conf/sentinel-${port}.conf
            fi
        fi
    done

    bash /generate-supervisor-conf.sh $INITIAL_PORT $max_port > /etc/supervisor/supervisord.conf
    supervisord -c /etc/supervisor/supervisord.conf
    sleep 3

    #
    ## Check the version of redis-cli and if we run on a redis server below 5.0
    ## If it is below 5.0 then we use the redis-trib.rb to build the cluster
    #
    /redis/src/redis-cli --version | grep -E "redis-cli 3.0|redis-cli 3.2|redis-cli 4.0"

    if [ $? -eq 0 ]; then
        echo "Using old redis-trib.rb to create the cluster"
        echo "yes" | eval ruby /redis/src/redis-trib.rb create --replicas "$SLAVES_PER_MASTER" "$nodes"
    else
        echo "Using redis-cli to create the cluster"
        echo "yes" | eval /redis/src/redis-cli --cluster create --cluster-replicas "$SLAVES_PER_MASTER" "$nodes"
    fi

    if [ "$SENTINEL" = "true" ]; then
        for port in $(seq $INITIAL_PORT $(($INITIAL_PORT + $MASTERS))); do
            redis-sentinel /redis-conf/sentinel-${port}.conf &
        done
    fi

    tail -f /var/log/supervisor/redis*.log
else
    exec "$@"
fi
```
@ -1,47 +0,0 @@
```sh
initial_port="$1"
max_port="$2"

program_entry_template ()
{
    local count=$1
    local port=$2
    echo "
[program:redis-$count]
command=/redis/src/redis-server /redis-conf/$port/redis.conf
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
"
}

result_str="
[unix_http_server]
file=/tmp/supervisor.sock            ; path to your socket file

[supervisord]
logfile=/supervisord.log             ; supervisord log file
logfile_maxbytes=50MB                ; maximum size of logfile before rotation
logfile_backups=10                   ; number of backed up logfiles
loglevel=error                       ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid     ; pidfile location
nodaemon=false                       ; run supervisord as a daemon
minfds=1024                          ; number of startup file descriptors
minprocs=200                         ; number of process descriptors
user=root                            ; default user
childlogdir=/                        ; where child log files will live

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
"

count=1
for port in `seq $initial_port $max_port`; do
    result_str="$result_str$(program_entry_template $count $port)"
    count=$((count + 1))
done

echo "$result_str"
```
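Each loop iteration above appends one `[program:redis-N]` block to the supervisord config; the expansion can be sketched in Python as a simplified mirror of `program_entry_template`:

```python
def program_entry(count: int, port: int) -> str:
    # Simplified mirror of program_entry_template from the generator script
    return (f"\n[program:redis-{count}]\n"
            f"command=/redis/src/redis-server /redis-conf/{port}/redis.conf\n"
            f"autorestart=true\n")

# Three nodes starting at port 7000, as in the default layout
entries = "".join(program_entry(i + 1, p) for i, p in enumerate(range(7000, 7003)))
print(entries)
```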
@ -1,7 +0,0 @@
```
bind ${BIND_ADDRESS}
port ${PORT}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
dir /redis-data/${PORT}
```
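The entrypoint fills the `${VAR}` placeholders in this template with `envsubst`; the same substitution can be sketched with Python's `string.Template`, which uses the same `${VAR}` syntax (values here are illustrative):

```python
from string import Template

# Placeholders as in redis-cluster.tmpl; values are illustrative
tmpl = Template("bind ${BIND_ADDRESS}\nport ${PORT}\ncluster-enabled yes\n")
conf = tmpl.substitute(BIND_ADDRESS="0.0.0.0", PORT="7000")
print(conf)
```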
@ -1,4 +0,0 @@
```
bind ${BIND_ADDRESS}
port ${PORT}
appendonly yes
dir /redis-data/${PORT}
```
@ -1,5 +0,0 @@
```
port ${SENTINEL_PORT}
sentinel monitor sentinel${PORT} 127.0.0.1 ${PORT} 2
sentinel down-after-milliseconds sentinel${PORT} 5000
sentinel failover-timeout sentinel${PORT} 60000
sentinel parallel-syncs sentinel${PORT} 1
```
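The `SENTINEL_PORT` above is derived in the entrypoint as the master's port minus 2000, so the default 3-master layout puts the sentinels on ports 5000-5002:

```python
# Defaults from the entrypoint script
INITIAL_PORT, MASTERS = 7000, 3

master_ports = range(INITIAL_PORT, INITIAL_PORT + MASTERS)
sentinel_ports = [p - 2000 for p in master_ports]  # offset used for SENTINEL_PORT
print(sentinel_ports)  # [5000, 5001, 5002]
```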
@ -1,123 +0,0 @@
```python
import multiprocessing
from multiprocessing import Pool

from invoke import task

latest_version_string = "6.2.1"

version_config_mapping = []
version_config_mapping += [f"3.0.{i}" for i in range(0, 8)]
version_config_mapping += [f"3.2.{i}" for i in range(0, 14)]
version_config_mapping += [f"4.0.{i}" for i in range(0, 15)]
version_config_mapping += [f"5.0.{i}" for i in range(0, 13)]
version_config_mapping += [f"6.0.{i}" for i in range(0, 13)]
version_config_mapping += [f"6.2-rc{i}" for i in range(1, 3)]
version_config_mapping += [f"6.2.{i}" for i in range(0, 2)]


def version_name_to_version(version):
    """
    Helper method that returns the correct versions if you specify either
      - all
      - latest
    or filters the available versions based on the version argument you passed in.
    """
    if version == "all":
        return version_config_mapping
    elif version == "latest":
        return [latest_version_string]
    else:
        return filter_versions(version)


def get_pool_size(cpu_from_cli):
    if cpu_from_cli:
        pool_size = int(cpu_from_cli)
    else:
        pool_size = multiprocessing.cpu_count() - 1
    print(f"Configured multiprocess pool size: {pool_size}")
    return pool_size


def filter_versions(desired_version):
    result = []
    for version in version_config_mapping:
        if version.startswith(desired_version):
            result.append(version)
    return result


def _docker_pull(config):
    """
    Internal multiprocess method to run a docker pull command
    """
    c, version = config
    print(f" -- Starting docker pull for version : {version}")
    pull_command = f"docker pull grokzen/redis-cluster:{version}"
    c.run(pull_command)


def _docker_build(config):
    """
    Internal multiprocess method to run a docker build command
    """
    c, version = config
    print(f" -- Starting docker build for version : {version}")
    build_command = f"docker build --build-arg redis_version={version} -t grokzen/redis-cluster:{version} ."
    c.run(build_command)


def _docker_push(config):
    """
    Internal multiprocess method to run a docker push command
    """
    c, version = config
    print(f" -- Starting docker push for version : {version}")
    push_command = f"docker push grokzen/redis-cluster:{version}"
    c.run(push_command)


@task
def pull(c, version, cpu=None):
    print(f" -- Docker pull version from docker-hub : {version}")
    pool = Pool(get_pool_size(cpu))
    pool.map(
        _docker_pull,
        [[c, version] for version in version_name_to_version(version)],
    )


@task
def build(c, version, cpu=None):
    print(f" -- Docker building version : {version}")
    pool = Pool(get_pool_size(cpu))
    pool.map(
        _docker_build,
        [[c, version] for version in version_name_to_version(version)],
    )


@task
def push(c, version, cpu=None):
    print(f" -- Docker push version to docker-hub : {version}")
    pool = Pool(get_pool_size(cpu))
    pool.map(
        _docker_push,
        [[c, version] for version in version_name_to_version(version)],
    )
```
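The prefix matching above lets a single version argument expand to a whole branch of the build matrix; a standalone sketch of that filtering logic (using a small illustrative version pool):

```python
# Subset of version_config_mapping, for illustration
versions = [f"6.0.{i}" for i in range(0, 13)] + ["6.2.0", "6.2.1"]

def filter_versions(desired_version, pool=versions):
    # Same prefix match as filter_versions in the tasks file
    return [v for v in pool if v.startswith(desired_version)]

print(filter_versions("6.2"))  # ['6.2.0', '6.2.1']
```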
@ -1,3 +1,20 @@
Delete all containers and images:

```sh
docker stop $(docker ps -a -q) && docker system prune --all --force
docker rm $(docker ps -a -q)
```

Apollo configuration center documentation:
https://www.apolloconfig.com/#/zh/deployment/quick-start

Redis cluster:
https://github.com/bitnami/bitnami-docker-redis-cluster

MongoDB cluster:
https://github.com/bitnami/bitnami-docker-mongodb-sharded

A CI workflow based on Gitea + Drone + Docker:
https://zhuanlan.zhihu.com/p/266072740

Code security guide:
https://github.com/Tencent/secguide