🐳chore(tools): remove tidb
parent
62e5c0215d
commit
e42df0197b
@ -1,14 +0,0 @@
/values.yaml
/generated-docker-compose.yml
/data
/logs
/pd/bin
/tikv/bin
/tidb/bin
/tidb-vision/tidb-vision
/tmp
/docker/dashboard_installer/dashboard/overview.json
/docker/dashboard_installer/dashboard/pd.json
/docker/dashboard_installer/dashboard/tidb.json
/docker/dashboard_installer/dashboard/tikv.json
/.idea
@ -1,24 +0,0 @@
sudo: required

services:
- docker

before_install:
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- sudo apt-get update
- sudo apt-get -y install docker-ce # update docker version
- sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose # the downloaded binary is not executable by default
- docker -v
- docker-compose -v
- docker-compose up -d
- sleep 10 # wait for all components to get ready
- docker-compose ps
- docker images
- docker network ls
- docker-compose logs

script:
- docker ps -a --format="{{.Names}} {{.Image}} {{.Status}}" | grep -v 'Up' | grep -v 'Exited (0)' | awk '{print} END {if (NR>0) {exit 1;}}'
- docker-compose -f docker-compose-test.yml run --rm tispark-tests bash /opt/tispark-tests/tests/loaddata.sh # add some data for tests
# - docker-compose -f docker-compose-test.yml run --rm tispark-tests /opt/spark/bin/spark-submit /opt/spark/tests/tests.py # run tispark tests
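The `script` health check above fails the build when any container is neither running nor exited cleanly. A minimal sketch of the same pipeline against stubbed `docker ps` output (the container names and statuses here are hypothetical):

```shell
# Stubbed `docker ps -a --format="{{.Names}} {{.Image}} {{.Status}}"` output;
# the container names and statuses are made up for illustration.
ps_output='pd0 pingcap/pd:latest Up 2 minutes
tikv0 pingcap/tikv:latest Up 2 minutes
loader busybox Exited (0) 1 minute ago'

# Same filter as the .travis.yml script: any container that is neither
# "Up" nor "Exited (0)" reaches awk, which then exits non-zero.
if printf '%s\n' "$ps_output" | grep -v 'Up' | grep -v 'Exited (0)' | awk '{print} END {if (NR>0) {exit 1;}}'; then
  echo "cluster healthy"
else
  echo "cluster broken"
fi
```

With the stubbed statuses above this prints `cluster healthy`; change `Exited (0)` to `Exited (1)` and it prints `cluster broken`.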
@ -1,277 +0,0 @@
# TiDB docker-compose

[](https://travis-ci.org/pingcap/tidb-docker-compose)

**WARNING: This is for testing only, DO NOT USE IN PRODUCTION!**

## Requirements

* Docker >= 17.03
* Docker Compose >= 1.6.0

> **Note:** [Legacy Docker Toolbox](https://docs.docker.com/toolbox/toolbox_install_mac/) users must migrate to [Docker for Mac](https://store.docker.com/editions/community/docker-ce-desktop-mac), since testing has shown that tidb-docker-compose cannot be started on Docker Toolbox and Docker Machine.

## Quick start

```bash
$ git clone https://github.com/pingcap/tidb-docker-compose.git
$ cd tidb-docker-compose && docker-compose pull # Get the latest Docker images
$ docker-compose up -d
$ mysql -h 127.0.0.1 -P 4000 -u root
```

* Access the monitoring dashboard at http://localhost:3000 (log in with admin/admin if you want to modify Grafana)
* Access [tidb-vision](https://github.com/pingcap/tidb-vision) at http://localhost:8010
* Access the Spark Web UI at http://localhost:8080 and access [TiSpark](https://github.com/pingcap/tispark) through spark://127.0.0.1:7077

## Docker Swarm

You can also use Docker Swarm to deploy a TiDB cluster, and then scale its services using `docker stack` commands.

```bash
$ docker swarm init # if your docker daemon is not already part of a swarm
$ mkdir -p data logs
$ docker stack deploy tidb -c docker-swarm.yml
$ mysql -h 127.0.0.1 -P 4000 -u root
```

After deploying the stack, you can scale the number of TiDB Server instances in the cluster like this:

```bash
$ docker service scale tidb_tidb=2
```

Docker Swarm automatically load-balances across the containers that implement a scaled service, which you can see if you execute `select @@hostname` several times:

```bash
$ mysql -h 127.0.0.1 -P 4000 -u root -te 'select @@hostname'
+--------------+
| @@hostname   |
+--------------+
| 340092e0ec9e |
+--------------+
$ mysql -h 127.0.0.1 -P 4000 -u root -te 'select @@hostname'
+--------------+
| @@hostname   |
+--------------+
| e6f05ffe6274 |
+--------------+
$ mysql -h 127.0.0.1 -P 4000 -u root -te 'select @@hostname'
+--------------+
| @@hostname   |
+--------------+
| 340092e0ec9e |
+--------------+
```

If you want to connect to specific backend instances, for example to test concurrency by ensuring that you are connecting to distinct instances of tidb-server, you can use the `docker service ps` command to assemble a hostname for each container:

```bash
$ docker service ps --no-trunc --format '{{.Name}}.{{.ID}}' tidb_tidb
tidb_tidb.1.x3sc2sd66a88phsj103ohr6qq
tidb_tidb.2.lk53apndq394cega46at853zw
```

To resolve those hostnames, it's easiest to run the MySQL client in a container that has access to the swarm network:

```bash
$ docker run --rm --network=tidb_default arey/mysql-client -h tidb_tidb.1.x3sc2sd66a88phsj103ohr6qq -P 4000 -u root -t -e 'select @@version'
+-----------------------------------------+
| @@version                               |
+-----------------------------------------+
| 5.7.25-TiDB-v3.0.0-beta.1-40-g873d9514b |
+-----------------------------------------+
```

To loop through all instances of TiDB Server, you can use a bash loop like this:

```bash
for host in $(docker service ps --no-trunc --format '{{.Name}}.{{.ID}}' tidb_tidb)
do docker run --rm --network tidb_default arey/mysql-client \
  -h "$host" -P 4000 -u root -te "select @@hostname"
done
```

To stop all services and remove all containers in the TiDB stack, execute `docker stack rm tidb`.

## Customize TiDB Cluster

### Configuration

* config/pd.toml is copied from the [PD repo](https://github.com/pingcap/pd/tree/master/conf)
* config/tikv.toml is copied from the [TiKV repo](https://github.com/pingcap/tikv/tree/master/etc)
* config/tidb.toml is copied from the [TiDB repo](https://github.com/pingcap/tidb/tree/master/config)
* config/pump.toml is copied from the [TiDB-Binlog repo](https://github.com/pingcap/tidb-binlog/tree/master/cmd/pump)
* config/drainer.toml is copied from the [TiDB-Binlog repo](https://github.com/pingcap/tidb-binlog/tree/master/cmd/drainer)

If you find these configuration files outdated or mismatched with your TiDB version, you can copy them from their upstream repos and change their metrics address to `pushgateway:9091`. Also, `max-open-files` is set to `1024` in tikv.toml to simplify quick start on Linux, because setting up ulimit on Linux with Docker is quite tedious.

The config/\*-dashboard.json files are copied from the [TiDB-Ansible repo](https://github.com/pingcap/tidb-ansible/tree/master/scripts).

You can customize the TiDB cluster configuration by editing docker-compose.yml and the above config files if you know what you're doing.

But editing these files manually is tedious and error-prone, so a template engine is strongly recommended. See the following steps:

### Install Helm

[Helm](https://helm.sh) is used as a template rendering engine:

```
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get | bash
```

On macOS, you can instead install Helm with Homebrew: `brew install kubernetes-helm`

### Bring up TiDB cluster

```bash
$ git clone https://github.com/pingcap/tidb-docker-compose.git
$ cd tidb-docker-compose
$ vi compose/values.yaml # customize cluster size, docker image, port mapping, etc.
$ helm template compose > generated-docker-compose.yaml
$ docker-compose -f generated-docker-compose.yaml pull # Get the latest Docker images
$ docker-compose -f generated-docker-compose.yaml up -d

# If you want to bring up the TiDB cluster with binlog support
$ vi compose/values.yaml # set tidb.enableBinlog to true
$ helm template compose > generated-docker-compose-binlog.yaml
$ docker-compose -f generated-docker-compose-binlog.yaml up -d # or you can use the 'docker-compose-binlog.yml' file directly

# Note: If the value of drainer.destDBType is "kafka" and
# you want to consume the kafka messages outside the docker containers,
# please update kafka.advertisedHostName with your docker host IP in compose/values.yaml and
# regenerate the 'generated-docker-compose-binlog.yaml' file
```

You can also build the Docker images yourself for development testing.

* Build from binaries

  For pd, tikv, tidb, pump and drainer, comment their `image` and `buildPath` fields out. Then copy their binary files to pd/bin/pd-server, tikv/bin/tikv-server, tidb/bin/tidb-server, tidb-binlog/bin/pump and tidb-binlog/bin/drainer.

  These binary files can be built locally or downloaded from https://download.pingcap.org/tidb-latest-linux-amd64.tar.gz

  For tidbVision, comment its `image` and `buildPath` fields out. Then copy the tidb-vision repo to tidb-vision/tidb-vision.

* Build from source

  Leave the pd, tikv, tidb and tidbVision `image` fields empty and set their `buildPath` fields to their source directories.

  For example, if your local tikv source directory is $GOPATH/src/github.com/pingcap/tikv, just set tikv's `buildPath` to `$GOPATH/src/github.com/pingcap/tikv`.

  *Note:* Compiling tikv from source consumes lots of memory; the memory of Docker for Mac needs to be raised above 6GB.

[tidb-vision](https://github.com/pingcap/tidb-vision) is a visualization page for the TiDB cluster. It's a WIP project and can be disabled by commenting `tidbVision` out.

[TiSpark](https://github.com/pingcap/tispark) is a thin layer built for running Apache Spark on top of TiDB/TiKV to answer complex OLAP queries.

#### Host network mode (Linux)

*Note:* Docker for Mac uses a Linux virtual machine, so host network mode does not expose any services to the host machine and is useless on Mac.

When using TiKV directly without TiDB, host network mode must be enabled. This way all services use the host network without isolation, so you can access all of them on the host machine.

You can enable this mode by setting `networkMode: host` in compose/values.yaml and regenerating docker-compose.yml. In this mode, the prometheus address in configuration files should be changed from `prometheus:9090` to `127.0.0.1:9090`, and the pushgateway address from `pushgateway:9091` to `127.0.0.1:9091`.

These modifications can be done by:

```bash
# Note: this is only needed when networkMode is `host`
sed -i 's/pushgateway:9091/127.0.0.1:9091/g' config/*
sed -i 's/prometheus:9090/127.0.0.1:9090/g' config/*
```

After all the above is done, you can start the TiDB cluster as usual with `docker-compose -f generated-docker-compose.yml up -d`.

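The sed rewrites above are plain substring substitutions. A minimal sketch of the same substitution applied to a single hypothetical config line (the `metrics-addr` key is an illustration, not a real line from this repo's config files):

```shell
# A hypothetical metrics line as it would appear in bridge mode.
line='metrics-addr = "pushgateway:9091"'

# The same substitution the README applies with `sed -i` to config/*.
printf '%s\n' "$line" | sed 's/pushgateway:9091/127.0.0.1:9091/g'
```

This prints `metrics-addr = "127.0.0.1:9091"`, which is what the host-mode services expect.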
### Debug TiDB/TiKV/PD instances

Prerequisites:

Pprof: a tool for visualization and analysis of profiling data. Follow [these instructions](https://github.com/google/pprof#building-pprof) to install pprof.

Graphviz: [http://www.graphviz.org/](http://www.graphviz.org/), used to generate graphic visualizations of profiles.

* Debug TiDB or PD instances

```bash
### Use the following command to start a web server for graphic visualizations of Go program profiles
$ ./tool/container_debug -s pd0 -p /pd-server -w
```

The above command produces graphic visualizations of `pd0`'s profiles that can be accessed through the browser.

* Debug TiKV instances

```bash
### step 1: select a tikv instance (here tikv0), specify its binary path in the container, and enter the debug container
$ ./tool/container_debug -s tikv0 -p /tikv-server

### after step 1, we can generate a flame graph for tikv0 in the debug container
$ ./run_flamegraph.sh 1 # 1 is tikv0's process id

### we can also fetch tikv0's stack information with GDB in the debug container
$ gdb /tikv-server 1 -batch -ex "thread apply all bt" -ex "info threads"
```

### Access TiDB cluster

TiDB uses ports 4000 (mysql) and 10080 (status) by default.

```bash
$ mysql -h 127.0.0.1 -P 4000 -u root --comments
```

Grafana uses port 3000 by default, so open your browser at http://localhost:3000 to view the monitoring dashboard.

If you enabled tidb-vision, you can view it at http://localhost:8010

### Access Spark shell and load TiSpark

Insert some sample data into the TiDB cluster:

```bash
$ docker-compose exec tispark-master bash
$ cd /opt/spark/data/tispark-sample-data
$ mysql --local-infile=1 -h tidb -P 4000 -u root --comments < dss.ddl
```

After the sample data is loaded into the TiDB cluster, you can access the Spark shell with `docker-compose exec tispark-master /opt/spark/bin/spark-shell`.

```bash
$ docker-compose exec tispark-master /opt/spark/bin/spark-shell
...
Spark context available as 'sc' (master = local[*], app id = local-1527045927617).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_172)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.TiContext
...
scala> val ti = new TiContext(spark)
...
scala> ti.tidbMapDatabase("TPCH_001")
...
scala> spark.sql("select count(*) from lineitem").show
+--------+
|count(1)|
+--------+
|   60175|
+--------+
```

You can also access Spark with Python or R using the following commands:

```
docker-compose exec tispark-master /opt/spark/bin/pyspark
docker-compose exec tispark-master /opt/spark/bin/sparkR
```

More documents about TiSpark can be found [here](https://github.com/pingcap/tispark).
@ -1,13 +0,0 @@
apiVersion: v1
description: tidb-docker-compose
name: tidb-docker-compose
version: 0.1.0
home: https://github.com/pingcap/tidb-docker-compose
sources:
- https://github.com/pingcap/tidb-docker-compose
keywords:
- newsql
- htap
- database
- mysql
- raft
@ -1,62 +0,0 @@
{{- define "initial_cluster" }}
{{- range until (.Values.pd.size | int) }}
{{- if . -}}
,
{{- end -}}
pd{{ . }}=http://
{{- if eq $.Values.networkMode "host" -}}
127.0.0.1:{{ add (add ($.Values.pd.port | int) 10000) . }}
{{- else -}}
pd{{ . }}:2380
{{- end -}}
{{- end -}}
{{- end -}}

{{- define "pd_list" }}
{{- range until (.Values.pd.size | int) }}
{{- if . -}}
,
{{- end -}}
{{- if eq $.Values.networkMode "host" -}}
127.0.0.1:{{ add ($.Values.pd.port | int) . }}
{{- else -}}
pd{{ . }}:2379
{{- end -}}
{{- end -}}
{{- end -}}

{{- define "pd_urls" }}
{{- range until (.Values.pd.size | int) }}
{{- if . -}}
,
{{- end -}}
{{- if eq $.Values.networkMode "host" -}}
http://127.0.0.1:{{ add ($.Values.pd.port | int) . }}
{{- else -}}
http://pd{{ . }}:2379
{{- end -}}
{{- end -}}
{{- end -}}

{{- define "zoo_servers" }}
{{- range until (.Values.zookeeper.size | int) }}
{{- if eq $.Values.networkMode "host" -}}
{{- if . }} {{ end }}server.{{ add . 1 }}=127.0.0.1:{{ add . 2888 }}:{{ add . 3888 }}
{{- else -}}
{{- if . }} {{ end }}server.{{ add . 1 }}=zoo{{ . }}:2888:3888
{{- end -}}
{{- end -}}
{{- end -}}

{{- define "zoo_connect" }}
{{- range until (.Values.zookeeper.size | int) }}
{{- if . -}}
,
{{- end -}}
{{- if eq $.Values.networkMode "host" -}}
127.0.0.1:{{ add $.Values.zookeeper.port . }}
{{- else -}}
zoo{{ . }}:{{ add $.Values.zookeeper.port . }}
{{- end -}}
{{- end -}}
{{- end -}}
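For reference, with the default `pd.size: 3` and bridge networking, the `initial_cluster` helper above joins one `pdN=http://pdN:2380` entry per PD instance with commas. A shell sketch of that expansion (the loop stands in for the template's `range`/`until`):

```shell
# Reproduce the bridge-mode output of the "initial_cluster" helper for pd.size=3.
size=3
result=""
i=0
while [ "$i" -lt "$size" ]; do
  [ "$i" -gt 0 ] && result="$result,"   # the template emits "," for every non-zero index
  result="${result}pd${i}=http://pd${i}:2380"
  i=$((i + 1))
done
echo "$result"   # → pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
```

In host mode the helper instead emits `127.0.0.1` with peer ports offset by 10000, matching `add (add ($.Values.pd.port | int) 10000) .`.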
@ -1,431 +0,0 @@
{{- $pdSize := .Values.pd.size | int }}
{{- $tikvSize := .Values.tikv.size | int }}
{{- $pdPort := .Values.pd.port | int }}
{{- $pdPeerPort := add $pdPort 10000 }}
{{- $tikvPort := .Values.tikv.port | int -}}
{{- $pumpSize := .Values.pump.size | int }}
{{- $pumpPort := .Values.pump.port | int }}
{{- $zooSize := .Values.zookeeper.size | int }}
{{- $zooPort := .Values.zookeeper.port | int }}
{{- $kafkaSize := .Values.kafka.size | int }}
{{- $kafkaPort := .Values.kafka.port | int }}
version: '2.1'

services:
{{- range until $pdSize }}
  pd{{ . }}:
    {{- if $.Values.pd.image }}
    image: {{ $.Values.pd.image }}
    {{- else }}
    image: pd:latest
    build:
      context: {{ $.Values.pd.buildPath | default "./pd" }}
      dockerfile: {{ $.Values.pd.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- else }}
    ports:
      - "2379"
    {{- end }}
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - {{ $.Values.dataDir }}:/data
      - {{ $.Values.logsDir }}:/logs
    command:
      - --name=pd{{ . }}
      {{- if eq $.Values.networkMode "host" }}
      - --client-urls=http://0.0.0.0:{{ add $pdPort . }}
      - --peer-urls=http://0.0.0.0:{{ add $pdPeerPort . }}
      - --advertise-client-urls=http://127.0.0.1:{{ add $pdPort . }}
      - --advertise-peer-urls=http://127.0.0.1:{{ add $pdPeerPort . }}
      {{- else }}
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd{{ . }}:2379
      - --advertise-peer-urls=http://pd{{ . }}:2380
      {{- end }}
      - --initial-cluster={{- template "initial_cluster" $ }}
      - --data-dir=/data/pd{{ . }}
      - --config=/pd.toml
      - --log-file=/logs/pd{{ . }}.log
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure
{{ end }}

{{- range until $tikvSize }}
  tikv{{ . }}:
    {{- if $.Values.tikv.image }}
    image: {{ $.Values.tikv.image }}
    {{- else }}
    image: tikv:latest
    build:
      context: {{ $.Values.tikv.buildPath | default "./tikv" }}
      dockerfile: {{ $.Values.tikv.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- end }}
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - {{ $.Values.dataDir }}:/data
      - {{ $.Values.logsDir }}:/logs
    command:
      {{- if eq $.Values.networkMode "host" }}
      - --addr=0.0.0.0:{{ add $tikvPort . }}
      - --advertise-addr=127.0.0.1:{{ add $tikvPort . }}
      {{- else }}
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv{{ . }}:20160
      {{- end }}
      - --data-dir=/data/tikv{{ . }}
      - --pd={{- template "pd_list" $ }}
      - --config=/tikv.toml
      - --log-file=/logs/tikv{{ . }}.log
    depends_on:
      {{- range until $pdSize }}
      - "pd{{ . }}"
      {{- end }}
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure
{{ end }}

{{- if .Values.tidb }}
{{- if .Values.tidb.enableBinlog }}
{{- range until $pumpSize }}
  pump{{ . }}:
    {{- if $.Values.pump.image }}
    image: {{ $.Values.pump.image }}
    {{- else }}
    image: tidb-binlog:latest
    build:
      context: {{ $.Values.pump.buildPath | default "./tidb-binlog" }}
      dockerfile: {{ $.Values.pump.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- end }}
    volumes:
      - ./config/pump.toml:/pump.toml:ro
      - {{ $.Values.dataDir }}:/data
      - {{ $.Values.logsDir }}:/logs
    command:
      - /pump
      {{- if eq $.Values.networkMode "host" }}
      - --addr=0.0.0.0:{{ add $pumpPort . }}
      - --advertise-addr=127.0.0.1:{{ add $pumpPort . }}
      {{- else }}
      - --addr=0.0.0.0:8250
      - --advertise-addr=pump{{ . }}:8250
      {{- end }}
      - --data-dir=/data/pump{{ . }}
      - --log-file=/logs/pump{{ . }}.log
      - --node-id=pump{{ . }}
      - --pd-urls={{- template "pd_urls" $ }}
      - --config=/pump.toml
    depends_on:
      {{- range until $pdSize }}
      - "pd{{ . }}"
      {{- end }}
    restart: on-failure
{{ end }}
  drainer:
    {{- if $.Values.drainer.image }}
    image: {{ $.Values.drainer.image }}
    {{- else }}
    image: tidb-binlog:latest
    build:
      context: {{ $.Values.drainer.buildPath | default "./tidb-binlog" }}
      dockerfile: {{ $.Values.drainer.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- end }}
    volumes:
      - ./config/drainer.toml:/drainer.toml:ro
      - {{ $.Values.dataDir }}:/data
      - {{ $.Values.logsDir }}:/logs
    command:
      - /drainer
      - --addr=0.0.0.0:8249
      - --data-dir=/data/data.drainer
      - --log-file=/logs/drainer.log
      - --pd-urls={{- template "pd_urls" $ }}
      - --config=/drainer.toml
      - --initial-commit-ts=0
      {{- if eq $.Values.drainer.destDBType "kafka" }}
      - --dest-db-type=kafka
      {{- end }}
    depends_on:
      {{- range until $pdSize }}
      - "pd{{ . }}"
      {{- end }}
      {{- if eq $.Values.drainer.destDBType "kafka" }}
      {{- range until $kafkaSize }}
      - "kafka{{ . }}"
      {{- end }}
      {{- end }}
    restart: on-failure

{{- if eq $.Values.drainer.destDBType "kafka" }}
{{ range until $zooSize }}
  zoo{{ . }}:
    image: zookeeper:latest
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- else }}
    ports:
      - "{{ add $zooPort . }}:{{ add $zooPort . }}"
    {{- end }}
    environment:
      ZOO_MY_ID: {{ add . 1 }}
      ZOO_PORT: {{ add $zooPort . }}
      ZOO_SERVERS: {{ template "zoo_servers" $ }}
    volumes:
      - {{ $.Values.dataDir }}/zoo{{ . }}/data:/data
      - {{ $.Values.dataDir }}/zoo{{ . }}/datalog:/datalog
    restart: on-failure
{{ end }}

{{- range until $kafkaSize }}
  kafka{{ . }}:
    image: {{ $.Values.kafka.image }}
    {{- if eq $.Values.networkMode "host" }}
    network_mode: host
    {{- else }}
    ports:
      - "{{ add . $kafkaPort }}:{{ add . $kafkaPort }}"
    {{- end }}
    environment:
      KAFKA_BROKER_ID: {{ add . 1 }}
      KAFKA_LOG_DIRS: /data/kafka-logs
      {{- if $.Values.kafka.advertisedHostName }}
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://{{ $.Values.kafka.advertisedHostName }}:{{ add . $kafkaPort }}
      {{- else }}
      {{- if eq $.Values.networkMode "host" }}
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:{{ add . $kafkaPort }}
      {{- else }}
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka{{ . }}:{{ add . $kafkaPort }}
      {{- end }}
      {{- end }}
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:{{ add . $kafkaPort }}
      KAFKA_ZOOKEEPER_CONNECT: {{ template "zoo_connect" $ }}
    volumes:
      - {{ $.Values.dataDir }}/kafka-logs/kafka{{ . }}:/data/kafka-logs
      - {{ $.Values.logsDir }}/kafka{{ . }}:/opt/kafka/logs
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      {{- range until $zooSize }}
      - "zoo{{ . }}"
      {{- end }}
    restart: on-failure
{{- end }}
{{- end }}
{{- end }}
{{ end }}

{{- if .Values.tidb }}
  tidb:
    {{- if .Values.tidb.image }}
    image: {{ .Values.tidb.image }}
    {{- else }}
    image: tidb:latest
    build:
      context: {{ .Values.tidb.buildPath | default "./tidb" }}
      dockerfile: {{ .Values.tidb.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq .Values.networkMode "host" }}
    network_mode: host
    {{- else }}
    ports:
      - "{{ .Values.tidb.mysqlPort }}:4000"
      - "{{ .Values.tidb.statusPort }}:10080"
    {{- end }}
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - {{ .Values.logsDir }}:/logs
    command:
      - --store=tikv
      - --path={{- template "pd_list" $ }}
      - --config=/tidb.toml
      - --log-file=/logs/tidb.log
      - --advertise-address=tidb
      {{- if .Values.tidb.enableBinlog }}
      - --enable-binlog=true
      {{- end }}
    depends_on:
      {{- range until $tikvSize }}
      - "tikv{{ . }}"
      {{- end }}
      {{- if .Values.tidb.enableBinlog }}
      {{- range until $pumpSize }}
      - "pump{{ . }}"
      {{- end }}
      {{- end }}
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure
{{ end }}

{{- if .Values.tispark }}
  tispark-master:
    {{- if .Values.tispark.image }}
    image: {{ .Values.tispark.image }}
    {{- else }}
    image: tispark:latest
    build:
      context: {{ .Values.tispark.buildPath | default "./tispark" }}
      dockerfile: {{ .Values.tispark.dockerfile | default "Dockerfile" }}
    {{- end }}
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: {{ .Values.tispark.masterPort }}
      SPARK_MASTER_WEBUI_PORT: {{ .Values.tispark.webuiPort }}
    ports:
      - "{{ .Values.tispark.masterPort }}:7077"
      - "{{ .Values.tispark.webuiPort }}:8080"
    depends_on:
      {{- range until $tikvSize }}
      - "tikv{{ . }}"
      {{- end }}
    restart: on-failure
{{- range until ( .Values.tispark.workerCount | int ) }}
  tispark-slave{{ . }}:
    {{- if $.Values.tispark.image }}
    image: {{ $.Values.tispark.image }}
    {{- else }}
    image: tispark:latest
    build:
      context: {{ $.Values.tispark.buildPath | default "./tispark" }}
      dockerfile: {{ $.Values.tispark.dockerfile | default "Dockerfile" }}
    {{- end }}
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: {{ add $.Values.tispark.workerWebUIPort . }}
    ports:
      - "{{ add $.Values.tispark.workerWebUIPort . }}:{{ add $.Values.tispark.workerWebUIPort . }}"
    depends_on:
      - tispark-master
    restart: on-failure
{{- end }}
{{ end }}

{{- if .Values.tidbVision }}
  tidb-vision:
    {{- if .Values.tidbVision.image }}
    image: {{ .Values.tidbVision.image }}
    {{- else }}
    image: tidb-vision:latest
    build:
      context: {{ .Values.tidbVision.buildPath | default "./tidb-vision" }}
      dockerfile: {{ .Values.tidbVision.dockerfile | default "Dockerfile" }}
    {{- end }}
    environment:
      PD_ENDPOINT: {{ if eq .Values.networkMode "host" }}127.0.0.1:{{ .Values.pd.port }}{{ else }}pd0:2379{{ end }}
    {{- if eq .Values.networkMode "host" }}
      PORT: {{ .Values.tidbVision.port }}
    network_mode: host
    {{- else }}
    ports:
      - "{{ .Values.tidbVision.port }}:8010"
    {{- end }}
    restart: on-failure
{{- end }}

{{- if .Values.prometheus }}
  pushgateway:
    image: {{ .Values.pushgateway.image }}
    {{- if eq .Values.networkMode "host" }}
    command:
      - --web.listen-address=0.0.0.0:{{ .Values.pushgateway.port }}
      - --log.level={{ .Values.pushgateway.logLevel }}
    network_mode: host
    {{- else }}
    command:
      - --log.level={{ .Values.pushgateway.logLevel }}
    {{- end }}
    restart: on-failure

  prometheus:
    user: root
    image: {{ .Values.prometheus.image }}
    command:
      - --log.level={{ .Values.prometheus.logLevel }}
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    {{- if eq .Values.networkMode "host" }}
      - --web.listen-address=0.0.0.0:{{ .Values.prometheus.port }}
    network_mode: host
    {{- else }}
    ports:
      - "{{ .Values.prometheus.port }}:9090"
    {{- end }}
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - {{ .Values.dataDir }}:/data
    restart: on-failure
{{- end }}

{{- if .Values.grafana }}
  grafana:
    image: {{ .Values.grafana.image }}
    user: "0"
    {{- if eq .Values.networkMode "host" }}
    network_mode: host
    environment:
      GF_SERVER_HTTP_PORT: {{ .Values.grafana.port }}
      GF_LOG_LEVEL: {{ .Values.grafana.logLevel }}
    {{- else }}
    environment:
      GF_LOG_LEVEL: {{ .Values.grafana.logLevel }}
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
    ports:
      - "{{ .Values.grafana.port }}:3000"
    {{- end }}
    volumes:
      - ./config/grafana:/etc/grafana
      - ./config/dashboards:/tmp/dashboards
      - ./data/grafana:/var/lib/grafana
    restart: on-failure

  dashboard-installer:
    {{- if .Values.dashboardInstaller.image }}
    image: {{ .Values.dashboardInstaller.image }}
    {{- else }}
    image: tidb-dashboard-installer:latest
    build:
      context: {{ .Values.dashboardInstaller.buildPath | default "./dashboard-installer" }}
      dockerfile: {{ .Values.dashboardInstaller.dockerfile | default "Dockerfile" }}
    {{- end }}
    {{- if eq .Values.networkMode "host" }}
    network_mode: host
    command: ["127.0.0.1:{{ .Values.grafana.port }}"]
    {{- else }}
    command: ["grafana:3000"]
    {{- end }}
    restart: on-failure
{{- end -}}
@ -1,125 +0,0 @@
dataDir: ./data
logsDir: ./logs
# supported networkMode: bridge | host
# host network mode does not work on macOS
networkMode: bridge

pd:
  size: 3
  image: pingcap/pd:latest

  # If you want to build the pd image from source, leave image empty and specify the pd source directory
  # and its dockerfile name
  # buildPath: ./pd
  # dockerfile: Dockerfile
  # when networkMode is host, pd ports range over [port, port+size)
  port: 2379

tikv:
  size: 3
  image: pingcap/tikv:latest

  # If you want to build the tikv image from source, leave image empty and specify the tikv source directory
  # and its dockerfile name
  # buildPath: ./tikv
  # dockerfile: Dockerfile
  # when networkMode is host, tikv ports range over [port, port+size)
  port: 20160

# comment this section out if you don't need the SQL layer and want to use TiKV directly
# when using TiKV directly, networkMode must be set to `host`
tidb:
  image: pingcap/tidb:latest

  # If you want to build the tidb image from source, leave image empty and specify the tidb source directory
  # and its dockerfile name
  # buildPath: ./tidb
  # dockerfile: Dockerfile
  mysqlPort: "4000"
  statusPort: "10080"
  enableBinlog: false

pump:
  size: 3
  image: pingcap/tidb-binlog:latest

  # If you want to build the pump image from source, leave image empty and specify the pump source directory
  # and its dockerfile name
  # buildPath: ./pump
  # dockerfile: Dockerfile
  # when networkMode is host, pump ports range over [port, port+size)
  port: 8250

drainer:
  image: pingcap/tidb-binlog:latest

  # If you want to build the drainer image from source, leave image empty and specify the drainer source directory
  # and its dockerfile name
  # buildPath: ./drainer
  # dockerfile: Dockerfile
  destDBType: "kafka"

zookeeper:
  size: 3
  image: zookeeper:latest
  port: 2181

kafka:
  size: 3
  image: wurstmeister/kafka:2.12-2.1.1
  # If you want to consume the Kafka messages outside the Docker containers,
  # update advertisedHostName with your Docker host IP
  advertisedHostName:
  port: 9092

tispark:
  image: pingcap/tispark:latest

  # If you want to build the tispark image from source, leave image empty and specify the tispark source directory
  # and its dockerfile name
  # buildPath: ./tispark
  # dockerfile: Dockerfile
  buildPath: ./tispark
  dockerfile: Dockerfile

  masterPort: 7077
  webuiPort: 8080
  workerCount: 1
  # worker web UI ports range over workerWebUIPort ~ workerWebUIPort+workerCount-1
  workerWebUIPort: 38081

# comment this out to disable tidb-vision
tidbVision:
  image: pingcap/tidb-vision:latest

  # If you want to build the tidb-vision image from source, leave image empty and specify the tidb-vision source directory
  # and its dockerfile name
  # buildPath: ./tidb-vision
  # dockerfile: Dockerfile
  port: "8010"

# comment the following monitor component sections out to disable monitoring
grafana:
  image: grafana/grafana:5.3.0
  port: "3000"
  logLevel: error

pushgateway:
  image: prom/pushgateway:v0.3.1
  port: "9091"
  logLevel: error

prometheus:
  image: prom/prometheus:v2.2.1
  port: "9090"
  logLevel: error

# This is used to import the TiDB monitoring dashboard templates into Grafana.
# The container runs only once and keeps restarting until the templates are imported successfully.
dashboardInstaller:
  image: pingcap/tidb-dashboard-installer:v2.0.0

  # If you want to build the tidb-dashboard-installer image from source, leave image empty and specify the tidb-dashboard-installer source directory
  # and its dockerfile name
  # buildPath: ./dashboard-installer
  # dockerfile: Dockerfile
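In host network mode, each scaled component (pd, tikv, pump) claims a contiguous range of host ports starting at its configured base port, as the comments above note. A minimal sketch of that convention:

```python
def host_mode_ports(base_port: int, size: int) -> list[int]:
    """Ports occupied by a scaled service in host network mode: [port, port+size)."""
    return list(range(base_port, base_port + size))

# With the defaults above, three PD instances occupy 2379-2381
# and three TiKV instances occupy 20160-20162.
print(host_mode_ports(2379, 3))   # [2379, 2380, 2381]
print(host_mode_ports(20160, 3))  # [20160, 20161, 20162]
```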
@ -1,9 +0,0 @@
# TiDB dashboard

With Grafana v5.x or later, we can use the provisioning feature to statically provision datasources and dashboards, so there is no need to configure Grafana with scripts.

The JSON files under `dashboards` are copied from [tidb-ansible](https://github.com/pingcap/tidb-ansible/tree/master/scripts), and the variables in the JSON files need to be replaced (this was previously done by a Python script).

It is used in [tidb-docker-compose](https://github.com/pingcap/tidb-docker-compose) and [tidb-operator](https://github.com/pingcap/tidb-operator).
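The variable replacement mentioned above amounts to substituting the exported datasource placeholder with a concrete datasource name and dropping the export-only sections. A minimal sketch (the `${DS_TEST-CLUSTER}` placeholder name is illustrative; the actual name depends on how each dashboard was exported):

```python
import json

def render_dashboard(template_text: str, datasource: str) -> dict:
    """Substitute the datasource placeholder and strip export-only keys."""
    rendered = template_text.replace("${DS_TEST-CLUSTER}", datasource)
    dashboard = json.loads(rendered)
    # Exported dashboards carry __inputs/__requires sections that provisioned
    # dashboards must not have, since provisioning cannot prompt for inputs.
    dashboard.pop("__inputs", None)
    dashboard.pop("__requires", None)
    return dashboard

template = '{"__inputs": [], "title": "TiDB", "datasource": "${DS_TEST-CLUSTER}"}'
print(render_dashboard(template, "tidb-cluster"))
```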
@ -1,101 +0,0 @@
# drainer Configuration.

# addr (i.e. 'host:port') to listen on for drainer connections
# will register this addr into etcd
# addr = "127.0.0.1:8249"

# the interval (in seconds) at which to detect pumps' status
detect-interval = 10

# drainer metadata directory path
data-dir = "data.drainer"

# a comma-separated list of PD endpoints
pd-urls = "http://127.0.0.1:2379"

# Use the specified compressor to compress payload between pump and drainer
compressor = ""

#[security]
# Path of file that contains list of trusted SSL CAs for connection with cluster components.
# ssl-ca = "/path/to/ca.pem"
# Path of file that contains X509 certificate in PEM format for connection with cluster components.
# ssl-cert = "/path/to/pump.pem"
# Path of file that contains X509 key in PEM format for connection with cluster components.
# ssl-key = "/path/to/pump-key.pem"

# syncer Configuration.
[syncer]

# Assume the upstream sql-mode.
# If this is set, the same sql-mode is used to parse DDL statements, and the same sql-mode is set downstream when db-type is mysql.
# If this is not set, no sql-mode is set.
# sql-mode = "STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"

# number of binlog events in a transaction batch
txn-batch = 20

# worker count to execute binlogs
# if the latency between drainer and the downstream (MySQL or TiDB) is too high, you may want to increase this
# to get higher throughput through more concurrent writes to the downstream
worker-count = 16

disable-dispatch = false

# safe mode splits an update into a delete and an insert
safe-mode = false

# downstream storage, equal to --dest-db-type
# valid values are "mysql", "file", "tidb", "flash", "kafka"
db-type = "kafka"

# do not sync these schemas
ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql"

## replicate-do-db takes priority over replicate-do-table when they share the same db name.
## Regular expressions are supported; a value starting with '~' is treated as a regular expression.
#
#replicate-do-db = ["~^b.*","s1"]

#[[syncer.replicate-do-table]]
#db-name = "test"
#tbl-name = "log"

#[[syncer.replicate-do-table]]
#db-name = "test"
#tbl-name = "~^a.*"

# do not sync these tables
#[[syncer.ignore-table]]
#db-name = "test"
#tbl-name = "log"

# the downstream mysql protocol database
#[syncer.to]
#host = "127.0.0.1"
#user = "root"
#password = ""
#port = 3306

[syncer.to.checkpoint]
# uncomment this to change the database used to save the checkpoint when the downstream is MySQL or TiDB
#schema = "tidb_binlog"

# Uncomment this if you want to use file as db-type.
#[syncer.to]
# directory to save binlog files; defaults to data-dir (where the checkpoint file is saved) if not configured.
# dir = "data.drainer"

# when db-type is kafka, you can uncomment this to configure the downstream Kafka; it is used as the global Kafka default config
[syncer.to]
# only one of zookeeper-addrs and kafka-addrs needs to be configured; the Kafka address is discovered from ZooKeeper if zookeeper-addrs is configured.
# zookeeper-addrs = "127.0.0.1:2181"
kafka-addrs = "kafka0:9092,kafka1:9093,kafka2:9094"
kafka-version = "2.1.1"
kafka-max-messages = 1024
#
#
# the topic name drainer pushes messages to; the default name is <cluster-id>_obinlog
# be careful not to use the same name when running multiple drainer instances
# topic-name = ""
@ -1,7 +0,0 @@
{
  "name": "tidb-cluster",
  "type": "prometheus",
  "url": "http://prometheus:9090",
  "access": "proxy",
  "basicAuth": false
}
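The datasource definition above is the same payload shape that Grafana's `POST /api/datasources` HTTP endpoint accepts, so the old script-based approach registered it against a running instance rather than relying on provisioning. A minimal sketch of building that payload (the curl invocation in the comment assumes Grafana's default `admin:admin` credentials on port 3000):

```python
import json

# Mirror the datasource file above as a JSON payload for Grafana's HTTP API.
datasource = {
    "name": "tidb-cluster",
    "type": "prometheus",
    "url": "http://prometheus:9090",
    "access": "proxy",
    "basicAuth": False,
}

payload = json.dumps(datasource)
print(payload)
# e.g. curl -X POST -H 'Content-Type: application/json' \
#      -d "$payload" http://admin:admin@localhost:3000/api/datasources
```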
@ -1,556 +0,0 @@
##################### Grafana Configuration Defaults #####################
#
# Do not modify this file in grafana installs
#

# possible values : production, development
app_mode = production

# instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty
instance_name = ${HOSTNAME}

#################################### Paths ###############################
[paths]
# Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used)
data = data

# Temporary files in `data` directory older than given duration will be removed
temp_data_lifetime = 24h

# Directory where grafana can store logs
logs = data/log

# Directory where grafana will automatically scan and look for plugins
plugins = data/plugins

# folder that contains provisioning config files that grafana will apply on startup and while running.
provisioning = conf/provisioning

#################################### Server ##############################
[server]
# Protocol (http, https, socket)
protocol = http

# The ip address to bind to, empty will bind to all interfaces
http_addr =

# The http port to use
http_port = 3000

# The public facing domain name used to access grafana from a browser
domain = localhost

# Redirect to correct domain if host header does not match domain
# Prevents DNS rebinding attacks
enforce_domain = false

# The full public facing url
root_url = %(protocol)s://%(domain)s:%(http_port)s/

# Log web requests
router_logging = false

# the path relative working path
static_root_path = public

# enable gzip
enable_gzip = false

# https certs & key file
cert_file =
cert_key =

# Unix socket path
socket = /tmp/grafana.sock

#################################### Database ############################
[database]
# You can configure the database connection by specifying type, host, name, user and password
# as separate properties or as one string using the url property.

# Either "mysql", "postgres" or "sqlite3", it's your choice
type = sqlite3
host = 127.0.0.1:3306
name = grafana
user = root
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
# Use either URL or the previous fields to configure the database
# Example: mysql://user:secret@host:port/database
url =

# Max idle conn setting default is 2
max_idle_conn = 2

# Max conn setting default is 0 (mean not set)
max_open_conn =

# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
conn_max_lifetime = 14400

# Set to true to log the sql calls and execution times.
log_queries =

# For "postgres", use either "disable", "require" or "verify-full"
# For "mysql", use either "true", "false", or "skip-verify".
ssl_mode = disable

ca_cert_path =
client_key_path =
client_cert_path =
server_cert_name =

# For "sqlite3" only, path relative to data_path setting
path = grafana.db

#################################### Session #############################
[session]
# Either "memory", "file", "redis", "mysql", "postgres", "memcache", default is "file"
provider = file

# Provider config options
# memory: does not have any config yet
# file: session dir path, is relative to grafana data_path
# redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana`
# postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable
# mysql: go-sql-driver/mysql dsn config string, examples:
#         `user:password@tcp(127.0.0.1:3306)/database_name`
#         `user:password@unix(/var/run/mysqld/mysqld.sock)/database_name`
# memcache: 127.0.0.1:11211

provider_config = sessions

# Session cookie name
cookie_name = grafana_sess

# If you use session in https only, default is false
cookie_secure = false

# Session life time, default is 86400
session_life_time = 86400
gc_interval_time = 86400

# Connection Max Lifetime default is 14400 (means 14400 seconds or 4 hours)
conn_max_lifetime = 14400

#################################### Data proxy ###########################
[dataproxy]

# This enables data proxy logging, default is false
logging = false

#################################### Analytics ###########################
[analytics]
# Server reporting, sends usage counters to stats.grafana.org every 24 hours.
# No ip addresses are being tracked, only simple counters to track
# running instances, dashboard and error counts. It is very helpful to us.
# Change this option to false to disable reporting.
reporting_enabled = true

# Set to false to disable all checks to https://grafana.com
# for new versions (grafana itself and plugins), check is used
# in some UI views to notify that grafana or plugin update exists
# This option does not cause any auto updates, nor send any information
# only a GET request to https://grafana.com to get latest versions
check_for_updates = true

# Google Analytics universal tracking code, only enabled if you specify an id here
google_analytics_ua_id =

# Google Tag Manager ID, only enabled if you specify an id here
google_tag_manager_id =

#################################### Security ############################
[security]
# default admin user, created on startup
admin_user = admin

# default admin password, can be changed before first start of grafana, or in profile settings
admin_password = admin

# used for signing
secret_key = SW2YcwTIb9zpOOhoPsMm

# Auto-login remember days
login_remember_days = 7
cookie_username = grafana_user
cookie_remember_name = grafana_remember

# disable gravatar profile images
disable_gravatar = false

# data source proxy whitelist (ip_or_domain:port separated by spaces)
data_source_proxy_whitelist =

# disable protection against brute force login attempts
disable_brute_force_login_protection = false

#################################### Snapshots ###########################
[snapshots]
# snapshot sharing options
external_enabled = true
external_snapshot_url = https://snapshots-origin.raintank.io
external_snapshot_name = Publish to snapshot.raintank.io

# remove expired snapshot
snapshot_remove_expired = true

#################################### Dashboards ##################

[dashboards]
# Number dashboard versions to keep (per dashboard). Default: 20, Minimum: 1
versions_to_keep = 20

#################################### Users ###############################
[users]
# disable user signup / registration
allow_sign_up = false

# Allow non admin users to create organizations
allow_org_create = false

# Set to true to automatically assign new users to the default organization (id 1)
auto_assign_org = true

# Set this value to automatically add new users to the provided organization (if auto_assign_org above is set to true)
auto_assign_org_id = 1

# Default role new users will be automatically assigned (if auto_assign_org above is set to true)
auto_assign_org_role = Viewer

# Require email validation before sign up completes
verify_email_enabled = false

# Background text for the user field on the login page
login_hint = email or username

# Default UI theme ("dark" or "light")
default_theme = dark

# External user management
external_manage_link_url =
external_manage_link_name =
external_manage_info =

# Viewers can edit/inspect dashboard settings in the browser. But not save the dashboard.
viewers_can_edit = false

[auth]
# Set to true to disable (hide) the login form, useful if you use OAuth
disable_login_form = false

# Set to true to disable the signout link in the side menu. useful if you use auth.proxy
disable_signout_menu = false

# URL to redirect the user to after sign out
signout_redirect_url =

#################################### Anonymous Auth ######################
[auth.anonymous]
# enable anonymous access
enabled = true

# specify organization name that should be used for unauthenticated users
org_name = Main Org.

# specify role for unauthenticated users
org_role = Viewer

#################################### Github Auth #########################
[auth.github]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email,read:org
auth_url = https://github.com/login/oauth/authorize
token_url = https://github.com/login/oauth/access_token
api_url = https://api.github.com/user
team_ids =
allowed_organizations =

#################################### GitLab Auth #########################
[auth.gitlab]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = api
auth_url = https://gitlab.com/oauth/authorize
token_url = https://gitlab.com/oauth/token
api_url = https://gitlab.com/api/v4
allowed_groups =

#################################### Google Auth #########################
[auth.google]
enabled = false
allow_sign_up = true
client_id = some_client_id
client_secret = some_client_secret
scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email
auth_url = https://accounts.google.com/o/oauth2/auth
token_url = https://accounts.google.com/o/oauth2/token
api_url = https://www.googleapis.com/oauth2/v1/userinfo
allowed_domains =
hosted_domain =

#################################### Grafana.com Auth ####################
# legacy key names (so they work in env variables)
[auth.grafananet]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
allowed_organizations =

[auth.grafana_com]
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
allowed_organizations =

#################################### Generic OAuth #######################
[auth.generic_oauth]
name = OAuth
enabled = false
allow_sign_up = true
client_id = some_id
client_secret = some_secret
scopes = user:email
email_attribute_name = email:primary
auth_url =
token_url =
api_url =
team_ids =
allowed_organizations =
tls_skip_verify_insecure = false
tls_client_cert =
tls_client_key =
tls_client_ca =

#################################### Basic Auth ##########################
[auth.basic]
enabled = true

#################################### Auth Proxy ##########################
[auth.proxy]
enabled = false
header_name = X-WEBAUTH-USER
header_property = username
auto_sign_up = true
ldap_sync_ttl = 60
whitelist =

#################################### Auth LDAP ###########################
[auth.ldap]
enabled = false
config_file = /etc/grafana/ldap.toml
allow_sign_up = true

#################################### SMTP / Emailing #####################
[smtp]
enabled = false
host = localhost:25
user =
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password =
cert_file =
key_file =
skip_verify = false
from_address = admin@grafana.localhost
from_name = Grafana
ehlo_identity =

[emails]
welcome_email_on_sign_up = false
templates_pattern = emails/*.html

#################################### Logging ##########################
[log]
# Either "console", "file", "syslog". Default is console and file
# Use space to separate multiple modes, e.g. "console file"
mode = console file

# Either "debug", "info", "warn", "error", "critical", default is "info"
level = info

# optional settings to set different levels for specific loggers. Ex filters = sqlstore:debug
filters =

# For "console" mode only
[log.console]
level =

# log line format, valid options are text, console and json
format = console

# For "file" mode only
[log.file]
level =

# log line format, valid options are text, console and json
format = text

# This enables automated log rotation (switch of following options), default is true
log_rotate = true

# Max line number of single file, default is 1000000
max_lines = 1000000

# Max size shift of single file, default is 28 (means 1 << 28, 256MB)
max_size_shift = 28

# Segment log daily, default is true
daily_rotate = true

# Expired days of log file (delete after max days), default is 7
max_days = 7

[log.syslog]
level =

# log line format, valid options are text, console and json
format = text

# Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used.
network =
address =

# Syslog facility. user, daemon and local0 through local7 are valid.
facility =

# Syslog tag. By default, the process' argv[0] is used.
tag =

#################################### Usage Quotas ########################
[quota]
enabled = false

#### set quotas to -1 to make unlimited. ####
# limit number of users per Org.
org_user = 10

# limit number of dashboards per Org.
org_dashboard = 100

# limit number of data_sources per Org.
org_data_source = 10

# limit number of api_keys per Org.
org_api_key = 10

# limit number of orgs a user can create.
user_org = 10

# Global limit of users.
global_user = -1

# global limit of orgs.
global_org = -1

# global limit of dashboards
global_dashboard = -1

# global limit of api_keys
global_api_key = -1

# global limit on number of logged in users.
global_session = -1

#################################### Alerting ############################
[alerting]
# Disable alerting engine & UI features
enabled = true
# Makes it possible to turn off alert rule execution but alerting UI is visible
execute_alerts = true

# Default setting for new alert rules. Defaults to categorize error and timeouts as alerting. (alerting, keep_state)
error_or_timeout = alerting

# Default setting for how Grafana handles nodata or null values in alerting. (alerting, no_data, keep_state, ok)
nodata_or_nullvalues = no_data

# Alert notifications can include images, but rendering many images at the same time can overload the server
# This limit will protect the server from render overloading and make sure notifications are sent out quickly
concurrent_render_limit = 5

#################################### Explore #############################
[explore]
# Enable the Explore section
enabled = false

#################################### Internal Grafana Metrics ############
# Metrics available at HTTP API Url /metrics
[metrics]
enabled = true
interval_seconds = 10

# Send internal Grafana metrics to graphite
[metrics.graphite]
# Enable by setting the address setting (ex localhost:2003)
address =
prefix = prod.grafana.%(instance_name)s.

[grafana_net]
url = https://grafana.com

[grafana_com]
url = https://grafana.com

#################################### Distributed tracing ############
[tracing.jaeger]
# jaeger destination (ex localhost:6831)
address =
# tag that will always be included when creating new spans. ex (tag1:value1,tag2:value2)
always_included_tag =
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
sampler_type = const
# jaeger samplerconfig param
# for "const" sampler, 0 or 1 for always false/true respectively
# for "probabilistic" sampler, a probability between 0 and 1
# for "rateLimiting" sampler, the number of spans per second
# for "remote" sampler, param is the same as for "probabilistic"
# and indicates the initial sampling rate before the actual one
# is received from the mothership
sampler_param = 1

#################################### External Image Storage ##############
[external_image_storage]
# You can choose between (s3, webdav, gcs, azure_blob, local)
provider =

[external_image_storage.s3]
bucket_url =
bucket =
region =
path =
access_key =
secret_key =

[external_image_storage.webdav]
url =
username =
password =
public_url =

[external_image_storage.gcs]
key_file =
bucket =
path =

[external_image_storage.azure_blob]
account_name =
account_key =
container_name =

[external_image_storage.local]
# does not require any configuration

[rendering]
# Options to configure external image rendering server like https://github.com/grafana/grafana-image-renderer
server_url =
callback_url =
@ -1,10 +0,0 @@
# # config file version
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    options:
      path: /tmp/dashboards
@ -1,53 +0,0 @@
# # config file version
apiVersion: 1

# # list of datasources that should be deleted from the database
#deleteDatasources:
#  - name: Graphite
#    orgId: 1

# # list of datasources to insert/update depending
# # on what's available in the database
datasources:
  # <string, required> name of the datasource. Required
  - name: tidb-cluster
    # <string, required> datasource type. Required
    type: prometheus
    # <string, required> access mode. direct or proxy. Required
    access: proxy
    # # <int> org id. will default to orgId 1 if not specified
    # orgId: 1
    # <string> url
    url: http://prometheus:9090
    # # <string> database password, if used
    # password:
    # # <string> database user, if used
    # user:
    # # <string> database name, if used
    # database:
    # <bool> enable/disable basic auth
    basicAuth: false
    # # <string> basic auth username
    # basicAuthUser:
    # # <string> basic auth password
    # basicAuthPassword:
    # # <bool> enable/disable with credentials headers
    # withCredentials:
    # # <bool> mark as default datasource. Max one per org
    # isDefault:
    # # <map> fields that will be converted to json and stored in json_data
    # jsonData:
    #   graphiteVersion: "1.1"
    #   tlsAuth: true
    #   tlsAuthWithCACert: true
    #   httpHeaderName1: "Authorization"
    # # <string> json object of data that will be encrypted.
    # secureJsonData:
    #   tlsCACert: "..."
    #   tlsClientCert: "..."
    #   tlsClientKey: "..."
    #   # <openshift\kubernetes token example>
    #   httpHeaderValue1: "Bearer xf5yhfkpsnmgo"
    # version: 1
    # # <bool> allow users to edit datasources from the UI.
    # editable: false
@ -1,3 +0,0 @@
[replication]
enable-placement-rules = true
max-replicas = 1
@ -1,146 +0,0 @@
groups:
- name: alert.rules
  rules:
  - alert: PD_cluster_offline_tikv_nums
    expr: sum ( pd_cluster_status{type="store_down_count"} ) > 0
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: sum ( pd_cluster_status{type="store_down_count"} ) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_cluster_offline_tikv_nums

  - alert: PD_etcd_write_disk_latency
    expr: histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[1m])) by (instance,job,le) ) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket[1m])) by (instance,job,le) ) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_etcd_write_disk_latency

  - alert: PD_miss_peer_region_count
    expr: sum( pd_regions_status{type="miss_peer_region_count"} ) > 100
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: sum( pd_regions_status{type="miss_peer_region_count"} ) > 100
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_miss_peer_region_count

  - alert: PD_cluster_lost_connect_tikv_nums
    expr: sum ( pd_cluster_status{type="store_disconnected_count"} ) > 0
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum ( pd_cluster_status{type="store_disconnected_count"} ) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_cluster_lost_connect_tikv_nums

  - alert: PD_cluster_low_space
    expr: sum ( pd_cluster_status{type="store_low_space_count"} ) > 0
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum ( pd_cluster_status{type="store_low_space_count"} ) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_cluster_low_space

  - alert: PD_etcd_network_peer_latency
    expr: histogram_quantile(0.99, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[1m])) by (To,instance,job,le) ) > 1
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: histogram_quantile(0.99, sum(rate(etcd_network_peer_round_trip_time_seconds_bucket[1m])) by (To,instance,job,le) ) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_etcd_network_peer_latency

  - alert: PD_tidb_handle_requests_duration
    expr: histogram_quantile(0.99, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket{type="tso"}[1m])) by (instance,job,le) ) > 0.1
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: histogram_quantile(0.99, sum(rate(pd_client_request_handle_requests_duration_seconds_bucket{type="tso"}[1m])) by (instance,job,le) ) > 0.1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_tidb_handle_requests_duration

  - alert: PD_down_peer_region_nums
    expr: sum ( pd_regions_status{type="down_peer_region_count"} ) > 0
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum ( pd_regions_status{type="down_peer_region_count"} ) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_down_peer_region_nums

  - alert: PD_incorrect_namespace_region_count
    expr: sum ( pd_regions_status{type="incorrect_namespace_region_count"} ) > 100
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum ( pd_regions_status{type="incorrect_namespace_region_count"} ) > 100
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_incorrect_namespace_region_count

  - alert: PD_pending_peer_region_count
    expr: sum( pd_regions_status{type="pending_peer_region_count"} ) > 100
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum( pd_regions_status{type="pending_peer_region_count"} ) > 100
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_pending_peer_region_count

  - alert: PD_leader_change
    expr: count( changes(pd_server_tso{type="save"}[10m]) > 0 ) >= 2
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: count( changes(pd_server_tso{type="save"}[10m]) > 0 ) >= 2
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: PD_leader_change

  - alert: TiKV_space_used_more_than_80%
    expr: sum(pd_cluster_status{type="storage_size"}) / sum(pd_cluster_status{type="storage_capacity"}) * 100 > 80
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum(pd_cluster_status{type="storage_size"}) / sum(pd_cluster_status{type="storage_capacity"}) * 100 > 80
    annotations:
      description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
      value: '{{ $value }}'
      summary: TiKV_space_used_more_than_80%
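Several of the latency rules above call PromQL's `histogram_quantile` over cumulative `_bucket` metrics. As a rough intuition for what those thresholds mean, here is a minimal sketch of the linear interpolation the function performs; this is a simplified model (it ignores the `rate()` window and the `+Inf` bucket handling of the real engine), not Prometheus's actual implementation:

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    buckets: sorted list of (upper_bound, cumulative_count) pairs,
    mirroring Prometheus "le" buckets. Simplified sketch only.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Interpolate linearly within the bucket that contains the rank.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# 99th percentile of 1000 observations, most of them under 0.5s:
print(histogram_quantile(0.99, [(0.1, 200), (0.5, 900), (1.0, 1000)]))  # 0.95
```

So a rule like `histogram_quantile(0.99, ...) > 1` fires when the interpolated 99th-percentile latency, computed this way per label group, exceeds one second.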
@ -1,86 +0,0 @@
# PD Configuration.

name = "pd"
data-dir = "default.pd"

client-urls = "http://127.0.0.1:2379"
# if not set, use ${client-urls}
advertise-client-urls = ""

peer-urls = "http://127.0.0.1:2380"
# if not set, use ${peer-urls}
advertise-peer-urls = ""

initial-cluster = "pd=http://127.0.0.1:2380"
initial-cluster-state = "new"

lease = 3
tso-save-interval = "3s"

[security]
# Path of file that contains list of trusted SSL CAs. If set, the following settings shouldn't be empty.
cacert-path = ""
# Path of file that contains X509 certificate in PEM format.
cert-path = ""
# Path of file that contains X509 key in PEM format.
key-path = ""

[log]
level = "error"

# log format, one of json, text, console
#format = "text"

# disable automatic timestamps in output
#disable-timestamp = false

# file logging
[log.file]
#filename = ""
# max log file size in MB
#max-size = 300
# max log file keep days
#max-days = 28
# maximum number of old log files to retain
#max-backups = 7
# rotate log by day
#log-rotate = true

[metric]
# prometheus client push interval, set "0s" to disable prometheus.
interval = "15s"
# prometheus pushgateway address; leaving it empty disables prometheus.
address = "pushgateway:9091"

[schedule]
max-merge-region-size = 0
split-merge-interval = "1h"
max-snapshot-count = 3
max-pending-peer-count = 16
max-store-down-time = "30m"
leader-schedule-limit = 4
region-schedule-limit = 4
replica-schedule-limit = 8
merge-schedule-limit = 8
tolerant-size-ratio = 5.0

# customized schedulers, the format is as below
# if empty, it will use balance-leader, balance-region, hot-region as default
# [[schedule.schedulers]]
# type = "evict-leader"
# args = ["1"]

[replication]
# The number of replicas for each region.
max-replicas = 3
# The label keys specify the location of a store.
# The placement priority is implied by the order of label keys.
# For example, ["zone", "rack"] means that we should place replicas to
# different zones first, then to different racks if we don't have enough zones.
location-labels = []

[label-property]
# Do not assign region leaders to stores that have these tags.
# [[label-property.reject-leader]]
# key = "zone"
# value = "cn1"
@ -1,15 +0,0 @@
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'tidb-cluster'
    scrape_interval: 5s
    honor_labels: true
    static_configs:
      - targets: ['pushgateway:9091']
        labels:
          cluster: 'tidb-cluster'
rule_files:
  - 'pd.rules.yml'
  - 'tikv.rules.yml'
  - 'tidb.rules.yml'
@ -1,45 +0,0 @@
# pump Configuration.

# addr (i.e. 'host:port') to listen on for client traffic
addr = "127.0.0.1:8250"

# addr (i.e. 'host:port') to advertise to the public
advertise-addr = ""

# an integer value that controls the expiry of the binlog data; indicates for how long (in days) the binlog data will be stored.
# must be bigger than 0
gc = 7

# path to the data directory of pump's data
data-dir = "data.pump"

# number of seconds between heartbeat ticks (in 2 seconds)
heartbeat-interval = 2

# a comma separated list of PD endpoints
pd-urls = "http://127.0.0.1:2379"

#[security]
# Path of file that contains list of trusted SSL CAs for connection with cluster components.
# ssl-ca = "/path/to/ca.pem"
# Path of file that contains X509 certificate in PEM format for connection with cluster components.
# ssl-cert = "/path/to/drainer.pem"
# Path of file that contains X509 key in PEM format for connection with cluster components.
# ssl-key = "/path/to/drainer-key.pem"
#
# [storage]
# Set to `true` (default) for best reliability, which prevents data loss when there is a power failure.
# sync-log = true
#
# We suggest using the default config of the embedded LSM DB; do not change it unless you know what you are doing.
# [storage.kv]
# block-cache-capacity = 8388608
# block-restart-interval = 16
# block-size = 4096
# compaction-L0-trigger = 8
# compaction-table-size = 67108864
# compaction-total-size = 536870912
# compaction-total-size-multiplier = 8.0
# write-buffer = 67108864
# write-L0-pause-trigger = 24
# write-L0-slowdown-trigger = 17
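The `gc = 7` setting above keeps binlog data for seven days. A tiny illustration of what that retention window means in practice (a hypothetical helper for exposition, not pump's actual code):

```python
from datetime import datetime, timedelta, timezone

def gc_cutoff(now: datetime, gc_days: int) -> datetime:
    """Return the timestamp before which binlog data is eligible for GC.

    Hypothetical helper: models pump's `gc` setting, which is expressed
    in days and, per the config comment, must be bigger than 0.
    """
    if gc_days <= 0:
        raise ValueError("gc must be bigger than 0")
    return now - timedelta(days=gc_days)

now = datetime(2018, 7, 10, tzinfo=timezone.utc)
print(gc_cutoff(now, 7))  # 2018-07-03 00:00:00+00:00
```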
@ -1,2 +0,0 @@
spark.tispark.pd.addresses pd0:2379
spark.sql.extensions org.apache.spark.sql.TiExtensions
File diff suppressed because it is too large
@ -1,134 +0,0 @@
groups:
- name: alert.rules
  rules:
  - alert: TiDB_schema_error
    expr: increase(tidb_session_schema_lease_error_total{type="outdated"}[15m]) > 0
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: increase(tidb_session_schema_lease_error_total{type="outdated"}[15m]) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB schema error

  - alert: TiDB_tikvclient_region_err_total
    expr: increase( tidb_tikvclient_region_err_total[10m] ) > 6000
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: increase( tidb_tikvclient_region_err_total[10m] ) > 6000
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB tikvclient_region_err_total error

  - alert: TiDB_domain_load_schema_total
    expr: increase( tidb_domain_load_schema_total{type="failed"}[10m] ) > 10
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: increase( tidb_domain_load_schema_total{type="failed"}[10m] ) > 10
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB domain_load_schema_total error

  - alert: TiDB_monitor_keep_alive
    expr: increase(tidb_monitor_keep_alive_total{job="tidb"}[10m]) < 100
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: increase(tidb_monitor_keep_alive_total{job="tidb"}[10m]) < 100
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB monitor_keep_alive error

  - alert: TiDB_server_panic_total
    expr: increase(tidb_server_panic_total[10m]) > 0
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: increase(tidb_server_panic_total[10m]) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB server panic total

  - alert: TiDB_memory_abnormal
    expr: go_memstats_heap_inuse_bytes{job="tidb"} > 1e+10
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: go_memstats_heap_inuse_bytes{job="tidb"} > 1e+10
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB heap memory usage is over 10 GB

  - alert: TiDB_query_duration
    expr: histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) BY (le, instance)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: histogram_quantile(0.99, sum(rate(tidb_server_handle_query_duration_seconds_bucket[1m])) BY (le, instance)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB query duration 99th percentile is above 1s

  - alert: TiDB_server_event_error
    expr: increase(tidb_server_server_event{type=~"server_start|server_hang"}[15m]) > 0
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: increase(tidb_server_server_event{type=~"server_start|server_hang"}[15m]) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB server event error

  - alert: TiDB_tikvclient_backoff_count
    expr: increase( tidb_tikvclient_backoff_count[10m] ) > 10
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: increase( tidb_tikvclient_backoff_count[10m] ) > 10
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB tikvclient_backoff_count error

  - alert: TiDB_monitor_time_jump_back_error
    expr: increase(tidb_monitor_time_jump_back_total[10m]) > 0
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: increase(tidb_monitor_time_jump_back_total[10m]) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB monitor time_jump_back error

  - alert: TiDB_ddl_waiting_jobs
    expr: sum(tidb_ddl_waiting_jobs) > 5
    for: 1m
    labels:
      env: test-cluster
      level: warning
      expr: sum(tidb_ddl_waiting_jobs) > 5
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiDB ddl waiting_jobs too many
@ -1,239 +0,0 @@
# TiDB Configuration.

# TiDB server host.
host = "0.0.0.0"

# TiDB server port.
port = 4000

# Registered store name, [tikv, mocktikv]
store = "mocktikv"

# TiDB storage path.
path = "/tmp/tidb"

# The socket file to use for connection.
socket = ""

# Run ddl worker on this tidb-server.
run-ddl = true

# Schema lease duration; very dangerous to change unless you know what you are doing.
lease = "0"

# When creating a table, split a separate region for it. It is recommended to
# turn off this option if there will be a large number of tables created.
split-table = true

# The limit of concurrent executed sessions.
token-limit = 1000

# Only print a log when out of memory quota.
# Valid options: ["log", "cancel"]
oom-action = "log"

# Set the memory quota for a query in bytes. Default: 32GB
mem-quota-query = 34359738368

# Enable coprocessor streaming.
enable-streaming = false

# Set system variable 'lower_case_table_names'
lower-case-table-names = 2

[log]
# Log level: debug, info, warn, error, fatal.
level = "error"

# Log format, one of json, text, console.
format = "text"

# Disable automatic timestamp in output
disable-timestamp = false

# Stores slow query log into separated files.
slow-query-file = ""

# Queries with execution time greater than this value will be logged. (Milliseconds)
slow-threshold = 300

# Queries with internal result greater than this value will be logged.
expensive-threshold = 10000

# Maximum query length recorded in log.
query-log-max-len = 2048

# File logging.
[log.file]
# Log file name.
filename = ""

# Max log file size in MB (upper limit to 4096MB).
max-size = 300

# Max log file keep days. No clean up by default.
max-days = 0

# Maximum number of old log files to retain. No clean up by default.
max-backups = 0

# Rotate log by day
log-rotate = true

[security]
# Path of file that contains list of trusted SSL CAs for connection with mysql client.
ssl-ca = ""

# Path of file that contains X509 certificate in PEM format for connection with mysql client.
ssl-cert = ""

# Path of file that contains X509 key in PEM format for connection with mysql client.
ssl-key = ""

# Path of file that contains list of trusted SSL CAs for connection with cluster components.
cluster-ssl-ca = ""

# Path of file that contains X509 certificate in PEM format for connection with cluster components.
cluster-ssl-cert = ""

# Path of file that contains X509 key in PEM format for connection with cluster components.
cluster-ssl-key = ""

[status]
# If enable status report HTTP service.
report-status = true

# TiDB status port.
status-port = 10080

# Prometheus pushgateway address; leaving it empty disables prometheus push.
metrics-addr = "pushgateway:9091"

# Prometheus client push interval in seconds; set "0" to disable prometheus push.
metrics-interval = 15

[performance]
# Max CPUs to use, 0 use number of CPUs in the machine.
max-procs = 0
# StmtCountLimit limits the max count of statement inside a transaction.
stmt-count-limit = 5000

# Set keep alive option for tcp connection.
tcp-keep-alive = true

# The maximum number of retries when committing a transaction.
retry-limit = 10

# Whether support cartesian product.
cross-join = true

# Stats lease duration, which influences the time of analyze and stats load.
stats-lease = "3s"

# Run auto analyze worker on this tidb-server.
run-auto-analyze = true

# Probability to use the query feedback to update stats, 0 or 1 for always false/true.
feedback-probability = 0.0

# The max number of query feedback that cache in memory.
query-feedback-limit = 1024

# Pseudo stats will be used if the ratio between the modify count and
# row count in statistics of a table is greater than it.
pseudo-estimate-ratio = 0.7

[proxy-protocol]
# PROXY protocol acceptable client networks.
# Empty string means disable PROXY protocol, * means all networks.
networks = ""

# PROXY protocol header read timeout, unit is second
header-timeout = 5

[plan-cache]
enabled = false
capacity = 2560
shards = 256

[prepared-plan-cache]
enabled = false
capacity = 100

[opentracing]
# Enable opentracing.
enable = false

# Whether to enable the rpc metrics.
rpc-metrics = false

[opentracing.sampler]
# Type specifies the type of the sampler: const, probabilistic, rateLimiting, or remote
type = "const"

# Param is a value passed to the sampler.
# Valid values for Param field are:
# - for "const" sampler, 0 or 1 for always false/true respectively
# - for "probabilistic" sampler, a probability between 0 and 1
# - for "rateLimiting" sampler, the number of spans per second
# - for "remote" sampler, param is the same as for "probabilistic"
#   and indicates the initial sampling rate before the actual one
#   is received from the mothership
param = 1.0

# SamplingServerURL is the address of jaeger-agent's HTTP sampling server
sampling-server-url = ""

# MaxOperations is the maximum number of operations that the sampler
# will keep track of. If an operation is not tracked, a default probabilistic
# sampler will be used rather than the per operation specific sampler.
max-operations = 0

# SamplingRefreshInterval controls how often the remotely controlled sampler will poll
# jaeger-agent for the appropriate sampling strategy.
sampling-refresh-interval = 0

[opentracing.reporter]
# QueueSize controls how many spans the reporter can keep in memory before it starts dropping
# new spans. The queue is continuously drained by a background go-routine, as fast as spans
# can be sent out of process.
queue-size = 0

# BufferFlushInterval controls how often the buffer is force-flushed, even if it's not full.
# It is generally not useful, as it only matters for very low traffic services.
buffer-flush-interval = 0

# LogSpans, when true, enables LoggingReporter that runs in parallel with the main reporter
# and logs all submitted spans. Main Configuration.Logger must be initialized in the code
# for this option to have any effect.
log-spans = false

# LocalAgentHostPort instructs reporter to send spans to jaeger-agent at this address
local-agent-host-port = ""

[tikv-client]
# Max gRPC connections that will be established with each tikv-server.
grpc-connection-count = 16

# After a duration of this time in seconds if the client doesn't see any activity it pings
# the server to see if the transport is still alive.
grpc-keepalive-time = 10

# After having pinged for keepalive check, the client waits for a duration of Timeout in seconds
# and if no activity is seen even after that the connection is closed.
grpc-keepalive-timeout = 3

# Max time for commit command; must be at least twice the raft election timeout.
commit-timeout = "41s"

[binlog]

# Socket file to write binlog.
binlog-socket = ""

# WriteTimeout specifies how long it will wait for writing binlog to pump.
write-timeout = "15s"

# If IgnoreError is true, when writing binlog meets an error, TiDB would stop writing binlog,
# but still provide service.
ignore-error = false
@ -1,45 +0,0 @@
log-file = "/logs/tiflash_tikv.log"

[readpool]

[readpool.coprocessor]

[readpool.storage]

[server]
engine-addr = "tiflash:4030"
addr = "0.0.0.0:20280"
advertise-addr = "tiflash:20280"
#status-addr = "tiflash:20292"

[storage]
data-dir = "/data/flash"

[pd]

[metric]

[raftstore]
capacity = "10GB"

[coprocessor]

[rocksdb]
wal-dir = ""

[rocksdb.defaultcf]

[rocksdb.lockcf]

[rocksdb.writecf]

[raftdb]

[raftdb.defaultcf]

[security]
ca-path = ""
cert-path = ""
key-path = ""

[import]
@ -1,79 +0,0 @@
default_profile = "default"
display_name = "TiFlash"
listen_host = "0.0.0.0"
mark_cache_size = 5368709120
tmp_path = "/data/tmp"
path = "/data"
tcp_port = 9110
http_port = 8223

[flash]
tidb_status_addr = "tidb:10080"
service_addr = "tiflash:4030"

[flash.flash_cluster]
cluster_manager_path = "/tiflash/flash_cluster_manager"
log = "/logs/tiflash_cluster_manager.log"
master_ttl = 60
refresh_interval = 20
update_rule_interval = 5

[flash.proxy]
config = "/tiflash-learner.toml"

[status]
metrics_port = 8234

[logger]
errorlog = "/logs/tiflash_error.log"
log = "/logs/tiflash.log"
count = 20
level = "debug"
size = "1000M"

[application]
runAsDaemon = true

[raft]
pd_addr = "pd0:2379"
storage_engine = "tmt"

[quotas]

[quotas.default]

[quotas.default.interval]
duration = 3600
errors = 0
execution_time = 0
queries = 0
read_rows = 0
result_rows = 0

[users]

[users.default]
password = ""
profile = "default"
quota = "default"

[users.default.networks]
ip = "::/0"

[users.readonly]
password = ""
profile = "readonly"
quota = "default"

[users.readonly.networks]
ip = "::/0"

[profiles]

[profiles.default]
load_balancing = "random"
max_memory_usage = 10000000000
use_uncompressed_cache = 0

[profiles.readonly]
readonly = 1
File diff suppressed because it is too large
@ -1,350 +0,0 @@
groups:
- name: alert.rules
  rules:
  - alert: TiKV_memory_used_too_fast
    expr: process_resident_memory_bytes{job=~"tikv.*"} - (process_resident_memory_bytes{job=~"tikv.*"} offset 5m) > 5*1024*1024*1024
    for: 5m
    labels:
      env: test-cluster
      level: emergency
      expr: process_resident_memory_bytes{job=~"tikv.*"} - (process_resident_memory_bytes{job=~"tikv.*"} offset 5m) > 5*1024*1024*1024
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, job: {{ $labels.job }}, values: {{ $value }}'
      value: '{{ $value }}'
      summary: TiKV memory used too fast

  - alert: TiKV_GC_can_not_work
    expr: sum(increase(tidb_tikvclient_gc_action_result{type="success"}[6h])) < 1
    for: 1m
    labels:
      env: test-cluster
      level: emergency
      expr: sum(increase(tidb_tikvclient_gc_action_result{type="success"}[6h])) < 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV GC can not work

  - alert: TiKV_server_report_failure_msg_total
    expr: sum(rate(tikv_server_report_failure_msg_total{type="unreachable"}[10m])) BY (store_id) > 10
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: sum(rate(tikv_server_report_failure_msg_total{type="unreachable"}[10m])) BY (store_id) > 10
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV server_report_failure_msg_total error

  - alert: TiKV_channel_full_total
    expr: sum(rate(tikv_channel_full_total[10m])) BY (type, instance) > 0
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: sum(rate(tikv_channel_full_total[10m])) BY (type, instance) > 0
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV channel full

  - alert: TiKV_write_stall
    expr: delta( tikv_engine_write_stall[10m]) > 0
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: delta( tikv_engine_write_stall[10m]) > 0
    annotations:
      description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
      value: '{{ $value }}'
      summary: TiKV write stall

  - alert: TiKV_raft_log_lag
    expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_log_lag_bucket[1m])) by (le, instance, job)) > 5000
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_log_lag_bucket[1m])) by (le, instance, job)) > 5000
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values: {{ $value }}'
      value: '{{ $value }}'
      summary: TiKV raftstore log lag more than 5000

  - alert: TiKV_async_request_snapshot_duration_seconds
    expr: histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type="snapshot"}[1m])) by (le, instance, job,type)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type="snapshot"}[1m])) by (le, instance, job,type)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV async request snapshot duration seconds more than 1s

  - alert: TiKV_async_request_write_duration_seconds
    expr: histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type="write"}[1m])) by (le, instance, job,type)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_storage_engine_async_request_duration_seconds_bucket{type="write"}[1m])) by (le, instance, job,type)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV async request write duration seconds more than 1s

  - alert: TiKV_coprocessor_request_wait_seconds
    expr: histogram_quantile(0.9999, sum(rate(tikv_coprocessor_request_wait_seconds_bucket[1m])) by (le, instance, job,req)) > 10
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.9999, sum(rate(tikv_coprocessor_request_wait_seconds_bucket[1m])) by (le, instance, job,req)) > 10
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV coprocessor request wait seconds more than 10s

  - alert: TiKV_raftstore_thread_cpu_seconds_total
    expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"raftstore_.*"}[1m])) by (job, name) > 0.8
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"raftstore_.*"}[1m])) by (job, name) > 0.8
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV raftstore thread CPU seconds is high

  - alert: TiKV_raft_append_log_duration_secs
    expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le, instance, job)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_append_log_duration_seconds_bucket[1m])) by (le, instance, job)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV_raft_append_log_duration_secs

  - alert: TiKV_raft_apply_log_duration_secs
    expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le, instance, job)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_apply_log_duration_seconds_bucket[1m])) by (le, instance, job)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV_raft_apply_log_duration_secs

  - alert: TiKV_scheduler_latch_wait_duration_seconds
    expr: histogram_quantile(0.99, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket[1m])) by (le, instance, job,type)) > 1
    for: 1m
    labels:
      env: test-cluster
      level: critical
      expr: histogram_quantile(0.99, sum(rate(tikv_scheduler_latch_wait_duration_seconds_bucket[1m])) by (le, instance, job,type)) > 1
    annotations:
      description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
      value: '{{ $value }}'
      summary: TiKV scheduler latch wait duration seconds more than 1s

  - alert: TiKV_thread_apply_worker_cpu_seconds
    expr: sum(rate(tikv_thread_cpu_seconds_total{name="apply_worker"}[1m])) by (job) > 0.9
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: critical
|
||||
expr: sum(rate(tikv_thread_cpu_seconds_total{name="apply_worker"}[1m])) by (job) > 0.9
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV thread apply worker cpu seconds is high
|
||||
|
||||
- alert: TiDB_tikvclient_gc_action_fail
|
||||
expr: sum(increase(tidb_tikvclient_gc_action_result{type="fail"}[1m])) > 10
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: critical
|
||||
expr: sum(increase(tidb_tikvclient_gc_action_result{type="fail"}[1m])) > 10
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiDB_tikvclient_gc_action_fail
|
||||
|
||||
- alert: TiKV_leader_drops
|
||||
expr: delta(tikv_pd_heartbeat_tick_total{type="leader"}[30s]) < -10
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: delta(tikv_pd_heartbeat_tick_total{type="leader"}[30s]) < -10
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV leader drops
|
||||
|
||||
- alert: TiKV_raft_process_ready_duration_secs
|
||||
expr: histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le, instance, job,type)) > 2
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='ready'}[1m])) by (le, instance, job,type)) > 2
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV_raft_process_ready_duration_secs
|
||||
|
||||
- alert: TiKV_raft_process_tick_duration_secs
|
||||
expr: histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, instance, job,type)) > 2
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: histogram_quantile(0.999, sum(rate(tikv_raftstore_raft_process_duration_secs_bucket{type='tick'}[1m])) by (le, instance, job,type)) > 2
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV_raft_process_tick_duration_secs
|
||||
|
||||
- alert: TiKV_scheduler_context_total
|
||||
expr: abs(delta( tikv_scheduler_contex_total[5m])) > 1000
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: abs(delta( tikv_scheduler_contex_total[5m])) > 1000
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV scheduler context total
|
||||
|
||||
- alert: TiKV_scheduler_command_duration_seconds
|
||||
expr: histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le, instance, job,type) / 1000) > 1
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: histogram_quantile(0.99, sum(rate(tikv_scheduler_command_duration_seconds_bucket[1m])) by (le, instance, job,type) / 1000) > 1
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV scheduler command duration seconds more than 1s
|
||||
|
||||
- alert: TiKV_thread_storage_scheduler_cpu_seconds
|
||||
expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"storage_schedul.*"}[1m])) by (job) > 0.8
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"storage_schedul.*"}[1m])) by (job) > 0.8
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values:{{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV storage scheduler cpu seconds more than 80%
|
||||
|
||||
- alert: TiKV_coprocessor_outdated_request_wait_seconds
|
||||
expr: delta( tikv_coprocessor_outdated_request_wait_seconds_count[10m] ) > 0
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: delta( tikv_coprocessor_outdated_request_wait_seconds_count[10m] ) > 0
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV coprocessor outdated request wait seconds
|
||||
|
||||
- alert: TiKV_coprocessor_request_error
|
||||
expr: increase(tikv_coprocessor_request_error{reason!="lock"}[10m]) > 100
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: increase(tikv_coprocessor_request_error{reason!="lock"}[10m]) > 100
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, reason: {{ $labels.reason }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV coprocessor request error
|
||||
|
||||
- alert: TiKV_coprocessor_request_lock_error
|
||||
expr: increase(tikv_coprocessor_request_error{reason="lock"}[10m]) > 10000
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: increase(tikv_coprocessor_request_error{reason="lock"}[10m]) > 10000
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, reason: {{ $labels.reason }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV coprocessor request lock error
|
||||
|
||||
- alert: TiKV_coprocessor_pending_request
|
||||
expr: delta( tikv_coprocessor_pending_request[10m]) > 5000
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: delta( tikv_coprocessor_pending_request[10m]) > 5000
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV pending {{ $labels.type }} request is high
|
||||
|
||||
- alert: TiKV_batch_request_snapshot_nums
|
||||
expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"endpoint.*"}[1m])) by (job) / ( count(tikv_thread_cpu_seconds_total{name=~"endpoint.*"}) * 0.9 ) / count(count(tikv_thread_cpu_seconds_total) by (instance)) > 0
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: sum(rate(tikv_thread_cpu_seconds_total{name=~"endpoint.*"}[1m])) by (job) / ( count(tikv_thread_cpu_seconds_total{name=~"endpoint.*"}) * 0.9 ) / count(count(tikv_thread_cpu_seconds_total) by (instance)) > 0
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV batch request snapshot nums is high
|
||||
|
||||
- alert: TiKV_pending_task
|
||||
expr: sum(tikv_worker_pending_task_total) BY (job,instance,name) > 1000
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: sum(tikv_worker_pending_task_total) BY (job,instance,name) > 1000
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV pending task too much
|
||||
|
||||
- alert: TiKV_low_space_and_add_region
|
||||
expr: count( (sum(tikv_store_size_bytes{type="available"}) by (job) / sum(tikv_store_size_bytes{type="capacity"}) by (job) < 0.2) and (sum(tikv_raftstore_snapshot_traffic_total{type="applying"}) by (job) > 0 ) ) > 0
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: count( (sum(tikv_store_size_bytes{type="available"}) by (job) / sum(tikv_store_size_bytes{type="capacity"}) by (job) < 0.2) and (sum(tikv_raftstore_snapshot_traffic_total{type="applying"}) by (job) > 0 ) ) > 0
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV low_space and add_region
|
||||
|
||||
- alert: TiKV_approximate_region_size
|
||||
expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le)) > 1073741824
|
||||
for: 1m
|
||||
labels:
|
||||
env: test-cluster
|
||||
level: warning
|
||||
expr: histogram_quantile(0.99, sum(rate(tikv_raftstore_region_size_bucket[1m])) by (le)) > 1073741824
|
||||
annotations:
|
||||
description: 'cluster: test-cluster, type: {{ $labels.type }}, instance: {{ $labels.instance }}, values: {{ $value }}'
|
||||
value: '{{ $value }}'
|
||||
summary: TiKV approximate region size is more than 1GB
|
||||
@ -1,497 +0,0 @@
|
||||
# TiKV config template
|
||||
# Human-readable big numbers:
|
||||
# File size(based on byte): KB, MB, GB, TB, PB
|
||||
# e.g.: 1_048_576 = "1MB"
|
||||
# Time(based on ms): ms, s, m, h
|
||||
# e.g.: 78_000 = "1.3m"
|
||||
|
||||
# log level: trace, debug, info, warn, error, off.
|
||||
log-level = "error"
|
||||
# file to store log, write to stderr if it's empty.
|
||||
# log-file = ""
|
||||
|
||||
[readpool.storage]
|
||||
# size of thread pool for high-priority operations
|
||||
# high-concurrency = 4
|
||||
# size of thread pool for normal-priority operations
|
||||
# normal-concurrency = 4
|
||||
# size of thread pool for low-priority operations
|
||||
# low-concurrency = 4
|
||||
# max running high-priority operations, reject if exceed
|
||||
# max-tasks-high = 8000
|
||||
# max running normal-priority operations, reject if exceed
|
||||
# max-tasks-normal = 8000
|
||||
# max running low-priority operations, reject if exceed
|
||||
# max-tasks-low = 8000
|
||||
# size of stack size for each thread pool
|
||||
# stack-size = "10MB"
|
||||
|
||||
[readpool.coprocessor]
|
||||
# Notice: if CPU_NUM > 8, default thread pool size for coprocessors
|
||||
# will be set to CPU_NUM * 0.8.
|
||||
|
||||
# high-concurrency = 8
|
||||
# normal-concurrency = 8
|
||||
# low-concurrency = 8
|
||||
# max-tasks-high = 16000
|
||||
# max-tasks-normal = 16000
|
||||
# max-tasks-low = 16000
|
||||
# stack-size = "10MB"
|
||||
|
||||
[server]
|
||||
# set listening address.
|
||||
# addr = "127.0.0.1:20160"
|
||||
# set advertise listening address for client communication, if not set, use addr instead.
|
||||
# advertise-addr = ""
|
||||
# notify capacity, 40960 is suitable for about 7000 regions.
|
||||
# notify-capacity = 40960
|
||||
# maximum number of messages can be processed in one tick.
|
||||
# messages-per-tick = 4096
|
||||
|
||||
# compression type for grpc channel, available values are no, deflate and gzip.
|
||||
# grpc-compression-type = "no"
|
||||
# size of thread pool for grpc server.
|
||||
# grpc-concurrency = 4
|
||||
# The number of max concurrent streams/requests on a client connection.
|
||||
# grpc-concurrent-stream = 1024
|
||||
# The number of connections with each tikv server to send raft messages.
|
||||
# grpc-raft-conn-num = 10
|
||||
# Amount to read ahead on individual grpc streams.
|
||||
# grpc-stream-initial-window-size = "2MB"
|
||||
|
||||
# How many snapshots can be sent concurrently.
|
||||
# concurrent-send-snap-limit = 32
|
||||
# How many snapshots can be recv concurrently.
|
||||
# concurrent-recv-snap-limit = 32
|
||||
|
||||
# max count of tasks being handled, new tasks will be rejected.
|
||||
# end-point-max-tasks = 2000
|
||||
|
||||
# max recursion level allowed when decoding dag expression
|
||||
# end-point-recursion-limit = 1000
|
||||
|
||||
# max time to handle coprocessor request before timeout
|
||||
# end-point-request-max-handle-duration = "60s"
|
||||
|
||||
# the max bytes that snapshot can be written to disk in one second,
|
||||
# should be set based on your disk performance
|
||||
# snap-max-write-bytes-per-sec = "100MB"
|
||||
|
||||
# set attributes about this server, e.g. { zone = "us-west-1", disk = "ssd" }.
|
||||
# labels = {}
|
||||
|
||||
[storage]
|
||||
# set the path to rocksdb directory.
|
||||
# data-dir = "/tmp/tikv/store"
|
||||
|
||||
# notify capacity of scheduler's channel
|
||||
# scheduler-notify-capacity = 10240
|
||||
|
||||
# maximum number of messages can be processed in one tick
|
||||
# scheduler-messages-per-tick = 1024
|
||||
|
||||
# the number of slots in scheduler latches, concurrency control for write.
|
||||
# scheduler-concurrency = 2048000
|
||||
|
||||
# scheduler's worker pool size, should increase it in heavy write cases,
|
||||
# also should less than total cpu cores.
|
||||
# scheduler-worker-pool-size = 4
|
||||
|
||||
# When the pending write bytes exceeds this threshold,
|
||||
# the "scheduler too busy" error is displayed.
|
||||
# scheduler-pending-write-threshold = "100MB"
|
||||
|
||||
[pd]
|
||||
# pd endpoints
|
||||
# endpoints = []
|
||||
|
||||
[metric]
|
||||
# the Prometheus client push interval. Setting the value to 0s stops Prometheus client from pushing.
|
||||
# interval = "15s"
|
||||
# the Prometheus pushgateway address. Leaving it empty stops Prometheus client from pushing.
|
||||
address = "pushgateway:9091"
|
||||
# the Prometheus client push job name. Note: A node id will automatically append, e.g., "tikv_1".
|
||||
# job = "tikv"
|
||||
|
||||
[raftstore]
|
||||
# true (default value) for high reliability, this can prevent data loss when power failure.
|
||||
# sync-log = true
|
||||
|
||||
# set the path to raftdb directory, default value is data-dir/raft
|
||||
# raftdb-path = ""
|
||||
|
||||
# set store capacity, if no set, use disk capacity.
|
||||
# capacity = 0
|
||||
|
||||
# notify capacity, 40960 is suitable for about 7000 regions.
|
||||
# notify-capacity = 40960
|
||||
|
||||
# maximum number of messages can be processed in one tick.
|
||||
# messages-per-tick = 4096
|
||||
|
||||
# Region heartbeat tick interval for reporting to pd.
|
||||
# pd-heartbeat-tick-interval = "60s"
|
||||
# Store heartbeat tick interval for reporting to pd.
|
||||
# pd-store-heartbeat-tick-interval = "10s"
|
||||
|
||||
# When region size changes exceeds region-split-check-diff, we should check
|
||||
# whether the region should be split or not.
|
||||
# region-split-check-diff = "6MB"
|
||||
|
||||
# Interval to check region whether need to be split or not.
|
||||
# split-region-check-tick-interval = "10s"
|
||||
|
||||
# When raft entry exceed the max size, reject to propose the entry.
|
||||
# raft-entry-max-size = "8MB"
|
||||
|
||||
# Interval to gc unnecessary raft log.
|
||||
# raft-log-gc-tick-interval = "10s"
|
||||
# A threshold to gc stale raft log, must >= 1.
|
||||
# raft-log-gc-threshold = 50
|
||||
# When entry count exceed this value, gc will be forced trigger.
|
||||
# raft-log-gc-count-limit = 72000
|
||||
# When the approximate size of raft log entries exceed this value, gc will be forced trigger.
|
||||
# It's recommanded to set it to 3/4 of region-split-size.
|
||||
# raft-log-gc-size-limit = "72MB"
|
||||
|
||||
# When a peer hasn't been active for max-peer-down-duration,
|
||||
# we will consider this peer to be down and report it to pd.
|
||||
# max-peer-down-duration = "5m"
|
||||
|
||||
# Interval to check whether start manual compaction for a region,
|
||||
# region-compact-check-interval = "5m"
|
||||
# Number of regions for each time to check.
|
||||
# region-compact-check-step = 100
|
||||
# The minimum number of delete tombstones to trigger manual compaction.
|
||||
# region-compact-min-tombstones = 10000
|
||||
# Interval to check whether should start a manual compaction for lock column family,
|
||||
# if written bytes reach lock-cf-compact-threshold for lock column family, will fire
|
||||
# a manual compaction for lock column family.
|
||||
# lock-cf-compact-interval = "10m"
|
||||
# lock-cf-compact-bytes-threshold = "256MB"
|
||||
|
||||
# Interval (s) to check region whether the data are consistent.
|
||||
# consistency-check-interval = 0
|
||||
|
||||
# Use delete range to drop a large number of continuous keys.
|
||||
# use-delete-range = false
|
||||
|
||||
# delay time before deleting a stale peer
|
||||
# clean-stale-peer-delay = "10m"
|
||||
|
||||
# Interval to cleanup import sst files.
|
||||
# cleanup-import-sst-interval = "10m"
|
||||
|
||||
[coprocessor]
|
||||
# When it is true, it will try to split a region with table prefix if
|
||||
# that region crosses tables. It is recommended to turn off this option
|
||||
# if there will be a large number of tables created.
|
||||
# split-region-on-table = true
|
||||
# When the region's size exceeds region-max-size, we will split the region
|
||||
# into two which the left region's size will be region-split-size or a little
|
||||
# bit smaller.
|
||||
# region-max-size = "144MB"
|
||||
# region-split-size = "96MB"
|
||||
|
||||
[rocksdb]
|
||||
# Maximum number of concurrent background jobs (compactions and flushes)
|
||||
# max-background-jobs = 8
|
||||
|
||||
# This value represents the maximum number of threads that will concurrently perform a
|
||||
# compaction job by breaking it into multiple, smaller ones that are run simultaneously.
|
||||
# Default: 1 (i.e. no subcompactions)
|
||||
# max-sub-compactions = 1
|
||||
|
||||
# Number of open files that can be used by the DB. You may need to
|
||||
# increase this if your database has a large working set. Value -1 means
|
||||
# files opened are always kept open. You can estimate number of files based
|
||||
# on target_file_size_base and target_file_size_multiplier for level-based
|
||||
# compaction.
|
||||
# If max-open-files = -1, RocksDB will prefetch index and filter blocks into
|
||||
# block cache at startup, so if your database has a large working set, it will
|
||||
# take several minutes to open the db.
|
||||
max-open-files = 1024
|
||||
|
||||
# Max size of rocksdb's MANIFEST file.
|
||||
# For detailed explanation please refer to https://github.com/facebook/rocksdb/wiki/MANIFEST
|
||||
# max-manifest-file-size = "20MB"
|
||||
|
||||
# If true, the database will be created if it is missing.
|
||||
# create-if-missing = true
|
||||
|
||||
# rocksdb wal recovery mode
|
||||
# 0 : TolerateCorruptedTailRecords, tolerate incomplete record in trailing data on all logs;
|
||||
# 1 : AbsoluteConsistency, We don't expect to find any corruption in the WAL;
|
||||
# 2 : PointInTimeRecovery, Recover to point-in-time consistency;
|
||||
# 3 : SkipAnyCorruptedRecords, Recovery after a disaster;
|
||||
# wal-recovery-mode = 2
|
||||
|
||||
# rocksdb write-ahead logs dir path
|
||||
# This specifies the absolute dir path for write-ahead logs (WAL).
|
||||
# If it is empty, the log files will be in the same dir as data.
|
||||
# When you set the path to rocksdb directory in memory like in /dev/shm, you may want to set
|
||||
# wal-dir to a directory on a persistent storage.
|
||||
# See https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database
|
||||
# wal-dir = "/tmp/tikv/store"
|
||||
|
||||
# The following two fields affect how archived write-ahead logs will be deleted.
|
||||
# 1. If both set to 0, logs will be deleted asap and will not get into the archive.
|
||||
# 2. If wal-ttl-seconds is 0 and wal-size-limit is not 0,
|
||||
# WAL files will be checked every 10 min and if total size is greater
|
||||
# then wal-size-limit, they will be deleted starting with the
|
||||
# earliest until size_limit is met. All empty files will be deleted.
|
||||
# 3. If wal-ttl-seconds is not 0 and wal-size-limit is 0, then
|
||||
# WAL files will be checked every wal-ttl-seconds / 2 and those that
|
||||
# are older than wal-ttl-seconds will be deleted.
|
||||
# 4. If both are not 0, WAL files will be checked every 10 min and both
|
||||
# checks will be performed with ttl being first.
|
||||
# When you set the path to rocksdb directory in memory like in /dev/shm, you may want to set
|
||||
# wal-ttl-seconds to a value greater than 0 (like 86400) and backup your db on a regular basis.
|
||||
# See https://github.com/facebook/rocksdb/wiki/How-to-persist-in-memory-RocksDB-database
|
||||
# wal-ttl-seconds = 0
|
||||
# wal-size-limit = 0
|
||||
|
||||
# rocksdb max total wal size
|
||||
# max-total-wal-size = "4GB"
|
||||
|
||||
# Rocksdb Statistics provides cumulative stats over time.
|
||||
# Turn statistics on will introduce about 5%-10% overhead for RocksDB,
|
||||
# but it is worthy to know the internal status of RocksDB.
|
||||
# enable-statistics = true
|
||||
|
||||
# Dump statistics periodically in information logs.
|
||||
# Same as rocksdb's default value (10 min).
|
||||
# stats-dump-period = "10m"
|
||||
|
||||
# Due to Rocksdb FAQ: https://github.com/facebook/rocksdb/wiki/RocksDB-FAQ,
|
||||
# If you want to use rocksdb on multi disks or spinning disks, you should set value at
|
||||
# least 2MB;
|
||||
# compaction-readahead-size = 0
|
||||
|
||||
# This is the maximum buffer size that is used by WritableFileWrite
|
||||
# writable-file-max-buffer-size = "1MB"
|
||||
|
||||
# Use O_DIRECT for both reads and writes in background flush and compactions
|
||||
# use-direct-io-for-flush-and-compaction = false
|
||||
|
||||
# Limit the disk IO of compaction and flush. Compaction and flush can cause
|
||||
# terrible spikes if they exceed a certain threshold. Consider setting this to
|
||||
# 50% ~ 80% of the disk throughput for a more stable result. However, in heavy
|
||||
# write workload, limiting compaction and flush speed can cause write stalls too.
|
||||
# rate-bytes-per-sec = 0
|
||||
|
||||
# Enable or disable the pipelined write
|
||||
# enable-pipelined-write = true
|
||||
|
||||
# Allows OS to incrementally sync files to disk while they are being
|
||||
# written, asynchronously, in the background.
|
||||
# bytes-per-sync = "0MB"
|
||||
|
||||
# Allows OS to incrementally sync WAL to disk while it is being written.
|
||||
# wal-bytes-per-sync = "0KB"
|
||||
|
||||
# Specify the maximal size of the Rocksdb info log file. If the log file
|
||||
# is larger than `max_log_file_size`, a new info log file will be created.
|
||||
# If max_log_file_size == 0, all logs will be written to one log file.
|
||||
# Default: 1GB
|
||||
# info-log-max-size = "1GB"
|
||||
|
||||
# Time for the Rocksdb info log file to roll (in seconds).
|
||||
# If specified with non-zero value, log file will be rolled
|
||||
# if it has been active longer than `log_file_time_to_roll`.
|
||||
# Default: 0 (disabled)
|
||||
# info-log-roll-time = "0"
|
||||
|
||||
# Maximal Rocksdb info log files to be kept.
|
||||
# Default: 10
|
||||
# info-log-keep-log-file-num = 10
|
||||
|
||||
# This specifies the Rocksdb info LOG dir.
|
||||
# If it is empty, the log files will be in the same dir as data.
|
||||
# If it is non empty, the log files will be in the specified dir,
|
||||
# and the db data dir's absolute path will be used as the log file
|
||||
# name's prefix.
|
||||
# Default: empty
|
||||
# info-log-dir = ""
|
||||
|
||||
# Column Family default used to store actual data of the database.
|
||||
[rocksdb.defaultcf]
|
||||
# compression method (if any) is used to compress a block.
|
||||
# no: kNoCompression
|
||||
# snappy: kSnappyCompression
|
||||
# zlib: kZlibCompression
|
||||
# bzip2: kBZip2Compression
|
||||
# lz4: kLZ4Compression
|
||||
# lz4hc: kLZ4HCCompression
|
||||
# zstd: kZSTD
|
||||
|
||||
# per level compression
|
||||
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
|
||||
|
||||
# Approximate size of user data packed per block. Note that the
|
||||
# block size specified here corresponds to uncompressed data.
|
||||
# block-size = "64KB"
|
||||
|
||||
# If you're doing point lookups you definitely want to turn bloom filters on, We use
|
||||
# bloom filters to avoid unnecessary disk reads. Default bits_per_key is 10, which
|
||||
# yields ~1% false positive rate. Larger bits_per_key values will reduce false positive
|
||||
# rate, but increase memory usage and space amplification.
|
||||
# bloom-filter-bits-per-key = 10
|
||||
|
||||
# false means one sst file one bloom filter, true means evry block has a corresponding bloom filter
|
||||
# block-based-bloom-filter = false
|
||||
|
||||
# level0-file-num-compaction-trigger = 4
|
||||
|
||||
# Soft limit on number of level-0 files. We start slowing down writes at this point.
|
||||
# level0-slowdown-writes-trigger = 20
|
||||
|
||||
# Maximum number of level-0 files. We stop writes at this point.
|
||||
# level0-stop-writes-trigger = 36
|
||||
|
||||
# Amount of data to build up in memory (backed by an unsorted log
|
||||
# on disk) before converting to a sorted on-disk file.
|
||||
# write-buffer-size = "128MB"
|
||||
|
||||
# The maximum number of write buffers that are built up in memory.
|
||||
# max-write-buffer-number = 5
|
||||
|
||||
# The minimum number of write buffers that will be merged together
|
||||
# before writing to storage.
|
||||
# min-write-buffer-number-to-merge = 1
|
||||
|
||||
# Control maximum total data size for base level (level 1).
|
||||
# max-bytes-for-level-base = "512MB"
|
||||
|
||||
# Target file size for compaction.
|
||||
# target-file-size-base = "8MB"
|
||||
|
||||
# Max bytes for compaction.max_compaction_bytes
|
||||
# max-compaction-bytes = "2GB"
|
||||
|
||||
# There are four different algorithms to pick files to compact.
|
||||
# 0 : ByCompensatedSize
|
||||
# 1 : OldestLargestSeqFirst
|
||||
# 2 : OldestSmallestSeqFirst
|
||||
# 3 : MinOverlappingRatio
|
||||
# compaction-pri = 3
|
||||
|
||||
# block-cache used to cache uncompressed blocks, big block-cache can speed up read.
|
||||
# in normal cases should tune to 30%-50% system's total memory.
|
||||
# block-cache-size = "1GB"
|
||||
|
||||
# Indicating if we'd put index/filter blocks to the block cache.
|
||||
# If not specified, each "table reader" object will pre-load index/filter block
|
||||
# during table initialization.
|
||||
# cache-index-and-filter-blocks = true
|
||||
|
||||
# Pin level0 filter and index blocks in cache.
|
||||
# pin-l0-filter-and-index-blocks = true
|
||||
|
||||
# Enable read amplication statistics.
|
||||
# value => memory usage (percentage of loaded blocks memory)
|
||||
# 1 => 12.50 %
|
||||
# 2 => 06.25 %
|
||||
# 4 => 03.12 %
|
||||
# 8 => 01.56 %
|
||||
# 16 => 00.78 %
|
||||
# read-amp-bytes-per-bit = 0
|
||||
|
||||
# Pick target size of each level dynamically.
|
||||
# dynamic-level-bytes = true
|
||||
|
||||
# Options for Column Family write
|
||||
# Column Family write used to store commit informations in MVCC model
|
||||
[rocksdb.writecf]
|
||||
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
|
||||
# block-size = "64KB"
|
||||
# write-buffer-size = "128MB"
|
||||
# max-write-buffer-number = 5
|
||||
# min-write-buffer-number-to-merge = 1
|
||||
# max-bytes-for-level-base = "512MB"
|
||||
# target-file-size-base = "8MB"
|
||||
|
||||
# in normal cases should tune to 10%-30% system's total memory.
|
||||
# block-cache-size = "256MB"
|
||||
# level0-file-num-compaction-trigger = 4
|
||||
# level0-slowdown-writes-trigger = 20
|
||||
# level0-stop-writes-trigger = 36
|
||||
# cache-index-and-filter-blocks = true
|
||||
# pin-l0-filter-and-index-blocks = true
|
||||
# compaction-pri = 3
|
||||
# read-amp-bytes-per-bit = 0
|
||||
# dynamic-level-bytes = true
|
||||
|
||||
[rocksdb.lockcf]
|
||||
# compression-per-level = ["no", "no", "no", "no", "no", "no", "no"]
|
||||
# block-size = "16KB"
|
||||
# write-buffer-size = "128MB"
|
||||
# max-write-buffer-number = 5
|
||||
# min-write-buffer-number-to-merge = 1
|
||||
# max-bytes-for-level-base = "128MB"
|
||||
# target-file-size-base = "8MB"
|
||||
# block-cache-size = "256MB"
|
||||
# level0-file-num-compaction-trigger = 1
|
||||
# level0-slowdown-writes-trigger = 20
|
||||
# level0-stop-writes-trigger = 36
|
||||
# cache-index-and-filter-blocks = true
|
||||
# pin-l0-filter-and-index-blocks = true
|
||||
# compaction-pri = 0
|
||||
# read-amp-bytes-per-bit = 0
|
||||
# dynamic-level-bytes = true
|
||||
|
||||
[raftdb]
|
||||
# max-sub-compactions = 1
|
||||
max-open-files = 1024
|
||||
# max-manifest-file-size = "20MB"
|
||||
# create-if-missing = true
|
||||
|
||||
# enable-statistics = true
|
||||
# stats-dump-period = "10m"
|
||||
|
||||
# compaction-readahead-size = 0
|
||||
# writable-file-max-buffer-size = "1MB"
|
||||
# use-direct-io-for-flush-and-compaction = false
|
||||
# enable-pipelined-write = true
|
||||
# allow-concurrent-memtable-write = false
|
||||
# bytes-per-sync = "0MB"
|
||||
# wal-bytes-per-sync = "0KB"
|
||||
|
||||
# info-log-max-size = "1GB"
|
||||
# info-log-roll-time = "0"
|
||||
# info-log-keep-log-file-num = 10
|
||||
# info-log-dir = ""
|
||||
|
||||
[raftdb.defaultcf]
|
||||
# compression-per-level = ["no", "no", "lz4", "lz4", "lz4", "zstd", "zstd"]
|
||||
# block-size = "64KB"
|
||||
# write-buffer-size = "128MB"
|
||||
# max-write-buffer-number = 5
|
||||
# min-write-buffer-number-to-merge = 1
|
||||
# max-bytes-for-level-base = "512MB"
|
||||
# target-file-size-base = "8MB"
|
||||
|
||||
# should tune to 256MB~2GB.
|
||||
# block-cache-size = "256MB"
|
||||
# level0-file-num-compaction-trigger = 4
|
||||
# level0-slowdown-writes-trigger = 20
|
||||
# level0-stop-writes-trigger = 36
|
||||
# cache-index-and-filter-blocks = true
|
||||
# pin-l0-filter-and-index-blocks = true
|
||||
# compaction-pri = 0
|
||||
# read-amp-bytes-per-bit = 0
|
||||
# dynamic-level-bytes = true
|
||||
|
||||
[security]
# set the path for certificates. An empty string disables secure connections.
# ca-path = ""
# cert-path = ""
# key-path = ""

[import]
# the directory to store importing kv data.
# import-dir = "/tmp/tikv/import"
# number of threads to handle RPC requests.
# num-threads = 8
# stream channel window size, stream will be blocked on channel full.
# stream-channel-window = 128
@ -1,7 +0,0 @@
FROM python:2.7-alpine

RUN apk add --no-cache ca-certificates curl

ADD dashboards /

ENTRYPOINT ["/tidb-dashboard-installer.sh"]
@ -1,13 +0,0 @@
# TiDB dashboard installer

This image is used to configure the Grafana datasource and dashboards for a TiDB cluster. It is used in [tidb-docker-compose](https://github.com/pingcap/tidb-docker-compose) and [tidb-operator](https://github.com/pingcap/tidb-operator).

The JSON files in dashboards are copied from [tidb-ansible](https://github.com/pingcap/tidb-ansible/tree/master/scripts).

Grafana versions prior to v5.0.0 can only use the import API to automate datasource and dashboard configuration, so this image must run inside the Docker environment. It runs only once per environment.

With Grafana v5.x, we can use the [provisioning](http://docs.grafana.org/administration/provisioning) feature to statically provision datasources and dashboards, with no need for scripts to configure Grafana.

But currently, the dashboards in the [tidb-ansible](https://github.com/pingcap/tidb-ansible/tree/master/scripts) repository are incompatible with Grafana v5.x and cannot be statically provisioned, so this image is still required.

In the future, we can use [grafonnet](https://github.com/grafana/grafonnet-lib) to migrate the old dashboards and make dashboard updates reviewable.
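The Grafana v5 provisioning mentioned above would replace this installer's import-API calls with static config files. A minimal sketch of the datasource side, assuming Grafana's documented provisioning file layout (the values mirror this repo's datasource.json; the file name itself is hypothetical):

```yaml
# /etc/grafana/provisioning/datasources/tidb-cluster.yaml (hypothetical location)
apiVersion: 1
datasources:
  - name: tidb-cluster           # same datasource name the dashboards reference
    type: prometheus
    url: http://127.0.0.1:9090   # the Prometheus endpoint used in this repo
    access: proxy
    basicAuth: false
```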
@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright {}

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@ -1,7 +0,0 @@
{
  "name": "tidb-cluster",
  "type": "prometheus",
  "url": "http://127.0.0.1:9090",
  "access": "proxy",
  "basicAuth": false
}
@ -1,12 +0,0 @@
[
  {
    "datasource": "tidb-cluster",
    "name": "TiDB-Cluster",
    "titles": {
      "overview": "TiDB-Cluster-Overview",
      "pd": "TiDB-Cluster-PD",
      "tidb": "TiDB-Cluster-TiDB",
      "tikv": "TiDB-Cluster-TiKV"
    }
  }
]
@ -1,110 +0,0 @@
#!/usr/bin/env python

from __future__ import print_function, \
    unicode_literals

import urllib
import urllib2
import base64
import json
import sys
from pprint import pprint

try:
    input = raw_input
except:
    pass

############################################################
################## CONFIGURATION ###########################
############################################################

# use a viewer key
src = dict(
    dashboards={
        "pd": 'pd.json',
        "tidb": 'tidb.json',
        "tikv": 'tikv.json',
        "overview": 'overview.json'
    })

dests = [
]

if not dests:
    with open("./dests.json") as fp:
        dests = json.load(fp)


############################################################
################## CONFIGURATION ENDS ######################
############################################################

def export_dashboard(api_url, api_key, dashboard_name):
    req = urllib2.Request(api_url + 'api/dashboards/db/' + dashboard_name,
                          headers={'Authorization': "Bearer {}".format(api_key)})

    resp = urllib2.urlopen(req)
    data = json.load(resp)
    return data['dashboard']


def fill_dashboard_with_dest_config(dashboard, dest, type_='node'):
    dashboard['title'] = dest['titles'][type_]
    dashboard['id'] = None
    # pprint(dashboard)
    for row in dashboard['rows']:
        for panel in row['panels']:
            panel['datasource'] = dest['datasource']

    if 'templating' in dashboard:
        for templating in dashboard['templating']['list']:
            if templating['type'] == 'query':
                templating['current'] = {}
                templating['options'] = []
                templating['datasource'] = dest['datasource']

    if 'annotations' in dashboard:
        for annotation in dashboard['annotations']['list']:
            annotation['datasource'] = dest['datasource']
    return dashboard

def import_dashboard_via_user_pass(api_url, user, password, dashboard):
    payload = {'dashboard': dashboard,
               'overwrite': True}
    auth_string = base64.b64encode('%s:%s' % (user, password))
    headers = {'Authorization': "Basic {}".format(auth_string),
               'Content-Type': 'application/json'}
    req = urllib2.Request(api_url + 'api/dashboards/db',
                          headers=headers,
                          data=json.dumps(payload))
    try:
        resp = urllib2.urlopen(req)
        data = json.load(resp)
        return data
    except urllib2.HTTPError as error:
        data = json.load(error)
        return data


if __name__ == '__main__':
    url = sys.argv[1]
    user = sys.argv[2]
    password = sys.argv[3]
    print(url)
    for type_ in src['dashboards']:
        print("[load] from <{}>:{}".format(
            src['dashboards'][type_], type_))

        dashboard = json.load(open(src['dashboards'][type_]))

        for dest in dests:
            dashboard = fill_dashboard_with_dest_config(dashboard, dest, type_)
            print("[import] as <{}> to [{}]".format(
                dashboard['title'], dest['name']), end='\t............. ')
            ret = import_dashboard_via_user_pass(url, user, password, dashboard)
            print(ret)

            if ret['status'] != 'success':
                print(' > ERROR: ', ret)
                raise RuntimeError
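The import half of the script above boils down to building a Basic-auth header and an overwrite payload for Grafana's `/api/dashboards/db` endpoint. A Python 3 sketch of just that request construction, using only the standard library (the helper name `build_import_request` is ours, and nothing is actually sent over the network):

```python
import base64
import json

def build_import_request(user, password, dashboard):
    # Same payload shape the installer POSTs: the dashboard JSON plus overwrite=True.
    payload = json.dumps({'dashboard': dashboard, 'overwrite': True})
    # Basic auth header, as in import_dashboard_via_user_pass above.
    token = base64.b64encode('{}:{}'.format(user, password).encode()).decode()
    headers = {'Authorization': 'Basic {}'.format(token),
               'Content-Type': 'application/json'}
    return headers, payload

headers, payload = build_import_request(
    'admin', 'admin', {'title': 'TiDB-Cluster-PD', 'id': None})
print(headers['Authorization'])  # Basic YWRtaW46YWRtaW4=
```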
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,12 +0,0 @@
#!/bin/sh

url=$1
userName=${GRAFANA_USERNAME:-admin}
password=${GRAFANA_PASSWORD:-admin}
datasource_url="http://${url}/api/datasources"
echo "Adding datasource..."
until curl -s -XPOST -H "Content-Type: application/json" --connect-timeout 1 -u ${userName}:${password} ${datasource_url} -d @/datasource.json >/dev/null; do
  sleep 1
done

python grafana-config-copy.py "http://${url}/" ${userName} ${password}
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,448 +0,0 @@
---
# Source: tidb-docker-compose/templates/docker-compose.yml
# dashboard-installer has been removed because it is no longer needed in the docker-compose model
version: '2.1'

services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
      - --log-file=/logs/pd0.log
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
      - --log-file=/logs/pd1.log
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
      - --log-file=/logs/pd2.log
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  tikv0:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data/tikv0
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv0.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  tikv1:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data/tikv1
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv1.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  tikv2:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data/tikv2
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv2.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  pump0:
    image: pingcap/tidb-binlog:latest
    volumes:
      - ./config/pump.toml:/pump.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - /pump
      - --addr=0.0.0.0:8250
      - --advertise-addr=pump0:8250
      - --data-dir=/data/pump0
      - --log-file=/logs/pump0.log
      - --node-id=pump0
      - --pd-urls=http://pd0:2379,http://pd1:2379,http://pd2:2379
      - --config=/pump.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure

  pump1:
    image: pingcap/tidb-binlog:latest
    volumes:
      - ./config/pump.toml:/pump.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - /pump
      - --addr=0.0.0.0:8250
      - --advertise-addr=pump1:8250
      - --data-dir=/data/pump1
      - --log-file=/logs/pump1.log
      - --node-id=pump1
      - --pd-urls=http://pd0:2379,http://pd1:2379,http://pd2:2379
      - --config=/pump.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure

  pump2:
    image: pingcap/tidb-binlog:latest
    volumes:
      - ./config/pump.toml:/pump.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - /pump
      - --addr=0.0.0.0:8250
      - --advertise-addr=pump2:8250
      - --data-dir=/data/pump2
      - --log-file=/logs/pump2.log
      - --node-id=pump2
      - --pd-urls=http://pd0:2379,http://pd1:2379,http://pd2:2379
      - --config=/pump.toml
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure

  drainer:
    image: pingcap/tidb-binlog:latest
    volumes:
      - ./config/drainer.toml:/drainer.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - /drainer
      - --addr=0.0.0.0:8249
      - --data-dir=/data/data.drainer
      - --log-file=/logs/drainer.log
      - --pd-urls=http://pd0:2379,http://pd1:2379,http://pd2:2379
      - --config=/drainer.toml
      - --initial-commit-ts=0
      - --dest-db-type=kafka
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
      - "kafka0"
      - "kafka1"
      - "kafka2"
    restart: on-failure

  zoo0:
    image: zookeeper:latest
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo0:2888:3888 server.2=zoo1:2888:3888 server.3=zoo2:2888:3888
    volumes:
      - ./data/zoo0/data:/data
      - ./data/zoo0/datalog:/datalog
    restart: on-failure

  zoo1:
    image: zookeeper:latest
    ports:
      - "2182:2182"
    environment:
      ZOO_MY_ID: 2
      ZOO_PORT: 2182
      ZOO_SERVERS: server.1=zoo0:2888:3888 server.2=zoo1:2888:3888 server.3=zoo2:2888:3888
    volumes:
      - ./data/zoo1/data:/data
      - ./data/zoo1/datalog:/datalog
    restart: on-failure

  zoo2:
    image: zookeeper:latest
    ports:
      - "2183:2183"
    environment:
      ZOO_MY_ID: 3
      ZOO_PORT: 2183
      ZOO_SERVERS: server.1=zoo0:2888:3888 server.2=zoo1:2888:3888 server.3=zoo2:2888:3888
    volumes:
      - ./data/zoo2/data:/data
      - ./data/zoo2/datalog:/datalog
    restart: on-failure

  kafka0:
    image: wurstmeister/kafka:2.12-2.1.1
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LOG_DIRS: /data/kafka-logs
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka0:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zoo0:2181,zoo1:2182,zoo2:2183
    volumes:
      - ./data/kafka-logs/kafka0:/data/kafka-logs
      - ./logs/kafka0:/opt/kafka/logs
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zoo0"
      - "zoo1"
      - "zoo2"
    restart: on-failure
  kafka1:
    image: wurstmeister/kafka:2.12-2.1.1
    ports:
      - "9093:9093"
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LOG_DIRS: /data/kafka-logs
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka1:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_ZOOKEEPER_CONNECT: zoo0:2181,zoo1:2182,zoo2:2183
    volumes:
      - ./data/kafka-logs/kafka1:/data/kafka-logs
      - ./logs/kafka1:/opt/kafka/logs
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zoo0"
      - "zoo1"
      - "zoo2"
    restart: on-failure
  kafka2:
    image: wurstmeister/kafka:2.12-2.1.1
    ports:
      - "9094:9094"
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LOG_DIRS: /data/kafka-logs
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka2:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      KAFKA_ZOOKEEPER_CONNECT: zoo0:2181,zoo1:2182,zoo2:2183
    volumes:
      - ./data/kafka-logs/kafka2:/data/kafka-logs
      - ./logs/kafka2:/opt/kafka/logs
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zoo0"
      - "zoo1"
      - "zoo2"
    restart: on-failure

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - ./logs:/logs
    command:
      - --store=tikv
      - --path=pd0:2379,pd1:2379,pd2:2379
      - --config=/tidb.toml
      - --log-file=/logs/tidb.log
      - --advertise-address=tidb
      - --enable-binlog=true
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
      - "pump0"
      - "pump1"
      - "pump2"
    # sysctls:
    #   net.core.somaxconn: 32768
    # ulimits:
    #   nofile:
    #     soft: 1000000
    #     hard: 1000000
    restart: on-failure

  tispark-master:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-slave0:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    restart: on-failure

  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"
    restart: on-failure
  pushgateway:
    image: prom/pushgateway:v0.3.1
    command:
      - --log.level=error
    restart: on-failure

  prometheus:
    user: root
    image: prom/prometheus:v2.2.1
    command:
      - --log.level=error
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - ./data:/data
    restart: on-failure
  grafana:
    image: grafana/grafana:5.3.0
    user: "0"
    environment:
      GF_LOG_LEVEL: error
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
    ports:
      - "3000:3000"
    volumes:
      - ./config/grafana:/etc/grafana
      - ./config/dashboards:/tmp/dashboards
      - ./data/grafana:/var/lib/grafana
    restart: on-failure
@ -1,13 +0,0 @@
version: '2.1'

services:
  tispark-tests:
    image: pingcap/tispark:latest
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
      - ./tispark-tests/tests:/opt/spark/tests:ro

networks:
  default:
    external:
      name: tidb-docker-compose_default
@ -1,66 +0,0 @@
version: '2.1'

services:
  pd0:
    image: pingcap/pd:nightly
    ports:
      - "2379"
    volumes:
      - ./config/pd-nightly-tiflash.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380
      - --data-dir=/data/pd
      - --config=/pd.toml
      - --log-file=/logs/pd.log
    restart: on-failure
  tikv:
    image: pingcap/tikv:nightly
    volumes:
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv:20160
      - --status-addr=tikv:20180
      - --data-dir=/data/tikv
      - --pd=pd0:2379
      - --log-file=/logs/tikv.log
    depends_on:
      - "pd0"
    restart: on-failure
  tidb:
    image: pingcap/tidb:nightly
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./logs:/logs
    command:
      - --status=10080
      - --advertise-address=tidb
      - --store=tikv
      - --path=pd0:2379
      - --log-file=/logs/tidb.log
    depends_on:
      - "tikv"
    restart: on-failure
  tiflash:
    image: pingcap/tiflash:nightly
    volumes:
      - ./config/tiflash-nightly.toml:/tiflash.toml:ro
      - ./config/tiflash-learner-nightly.toml:/tiflash-learner.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --config=/tiflash.toml
    depends_on:
      - "tikv"
      - "tidb"
    restart: on-failure
@ -1,210 +0,0 @@
version: '2.1'

services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
      - --log-file=/logs/pd0.log
    restart: on-failure
  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
      - --log-file=/logs/pd1.log
    restart: on-failure
  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
      - --log-file=/logs/pd2.log
    restart: on-failure
  tikv0:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data/tikv0
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv0.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv1:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data/tikv1
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv1.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure
  tikv2:
    image: pingcap/tikv:latest
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data/tikv2
      - --pd=pd0:2379,pd1:2379,pd2:2379
      - --config=/tikv.toml
      - --log-file=/logs/tikv2.log
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    restart: on-failure

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - ./logs:/logs
    command:
      - --store=tikv
      - --path=pd0:2379,pd1:2379,pd2:2379
      - --config=/tidb.toml
      - --log-file=/logs/tidb.log
      - --advertise-address=tidb
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-master:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv0"
      - "tikv1"
      - "tikv2"
    restart: on-failure
  tispark-slave0:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    restart: on-failure

  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"
    restart: on-failure

  # monitors
  pushgateway:
    image: prom/pushgateway:v0.3.1
    command:
      - --log.level=error
    restart: on-failure
  prometheus:
    user: root
    image: prom/prometheus:v2.2.1
    command:
      - --log.level=error
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - ./data:/data
    restart: on-failure
  grafana:
    image: grafana/grafana:6.0.1
    user: "0"
    environment:
      GF_LOG_LEVEL: error
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
    volumes:
      - ./config/grafana:/etc/grafana
      - ./config/dashboards:/tmp/dashboards
      - ./data/grafana:/var/lib/grafana
    ports:
      - "3000:3000"
    restart: on-failure
@ -1,171 +0,0 @@
version: '3.3'

networks:
  default:
    driver: overlay
    attachable: true

services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd0
      - --config=/pd.toml
      - --log-file=/logs/pd0.log
  pd1:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd1
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd1:2379
      - --advertise-peer-urls=http://pd1:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd1
      - --config=/pd.toml
      - --log-file=/logs/pd1.log
  pd2:
    image: pingcap/pd:latest
    ports:
      - "2379"
    volumes:
      - ./config/pd.toml:/pd.toml:ro
      - ./data:/data
      - ./logs:/logs
    command:
      - --name=pd2
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd2:2379
      - --advertise-peer-urls=http://pd2:2380
      - --initial-cluster=pd0=http://pd0:2380,pd1=http://pd1:2380,pd2=http://pd2:2380
      - --data-dir=/data/pd2
      - --config=/pd.toml
      - --log-file=/logs/pd2.log
  tikv:
    image: pingcap/tikv:latest
    ports:
      - target: 20160
        published: 20160
    environment:
      - TASK_SLOT={{.Task.Slot}}
    volumes:
      - ./config/tikv.toml:/tikv.toml:ro
      - ./data:/data
      - ./logs:/logs
    entrypoint: [ "/bin/sh", "-c", "/tikv-server --advertise-addr=$$HOSTNAME:20160 --addr=0.0.0.0:20160 --data-dir=/data/tikv$$TASK_SLOT --pd=pd0:2379,pd1:2379,pd2:2379 --config=/tikv.toml --log-file=/logs/tikv$$TASK_SLOT.log --log-level=info" ]
    depends_on:
      - "pd0"
      - "pd1"
      - "pd2"
    deploy:
      replicas: 3
      restart_policy:
        condition: on-failure

  tidb:
    image: pingcap/tidb:latest
    ports:
      - target: 4000
        published: 4000
      - target: 10080
        published: 10080
    environment:
      - TASK_SLOT={{.Task.Slot}}
    volumes:
      - ./config/tidb.toml:/tidb.toml:ro
      - ./logs:/logs
    entrypoint: [ "/bin/sh", "-c", "/tidb-server --advertise-address=$$HOSTNAME --store=tikv --path=pd0:2379,pd1:2379,pd2:2379 --config=/tidb.toml --log-file=/logs/tidb$$TASK_SLOT.log -L info" ]
    depends_on:
      - "tikv"
    deploy:
      replicas: 1

  tispark-master:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-master.sh
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_MASTER_PORT: 7077
      SPARK_MASTER_WEBUI_PORT: 8080
    ports:
      - "7077:7077"
      - "8080:8080"
    depends_on:
      - "tikv"
    deploy:
      replicas: 1
  tispark-slave:
    image: pingcap/tispark:latest
    command:
      - /opt/spark/sbin/start-slave.sh
      - spark://tispark-master:7077
    volumes:
      - ./config/spark-defaults.conf:/opt/spark/conf/spark-defaults.conf:ro
    environment:
      SPARK_WORKER_WEBUI_PORT: 38081
    ports:
      - "38081:38081"
    depends_on:
      - tispark-master
    deploy:
      replicas: 1

  tidb-vision:
    image: pingcap/tidb-vision:latest
    environment:
      PD_ENDPOINT: pd0:2379
    ports:
      - "8010:8010"

  # monitors
  pushgateway:
    image: prom/pushgateway:v0.3.1
    command:
      - --log.level=error
  prometheus:
    user: root
    image: prom/prometheus:v2.2.1
    command:
      - --log.level=error
      - --storage.tsdb.path=/data/prometheus
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./config/pd.rules.yml:/etc/prometheus/pd.rules.yml:ro
      - ./config/tikv.rules.yml:/etc/prometheus/tikv.rules.yml:ro
      - ./config/tidb.rules.yml:/etc/prometheus/tidb.rules.yml:ro
      - ./data:/data
  grafana:
    image: grafana/grafana:6.0.1
    environment:
      GF_LOG_LEVEL: error
      GF_PATHS_PROVISIONING: /etc/grafana/provisioning
      GF_PATHS_CONFIG: /etc/grafana/grafana.ini
    volumes:
      - ./config/grafana:/etc/grafana
      - ./config/dashboards:/var/lib/grafana/dashboards
    ports:
      - "3000:3000"
@ -1,50 +0,0 @@
FROM centos:7

RUN yum update -y && yum install -y \
    curl \
    file \
    gdb \
    git \
    iotop \
    linux-perf \
    mysql \
    net-tools \
    perf \
    perl \
    procps-ng \
    psmisc \
    strace \
    sysstat \
    tree \
    tcpdump \
    unzip \
    vim \
    wget \
    which \
    && yum clean all \
    && rm -rf /var/cache/yum/*

RUN wget -q http://download.pingcap.org/tidb-latest-linux-amd64.tar.gz \
    && tar xzf tidb-latest-linux-amd64.tar.gz \
    && mv tidb-latest-linux-amd64/bin/* /usr/local/bin/ \
    && rm -rf tidb-latest-linux-amd64.tar.gz tidb-latest-linux-amd64

RUN wget https://github.com/brendangregg/FlameGraph/archive/master.zip \
    && unzip master.zip \
    && mv FlameGraph-master /opt/FlameGraph \
    && rm master.zip
ADD run_flamegraph.sh /run_flamegraph.sh

# used for go pprof
ENV GOLANG_VERSION 1.10
ENV GOLANG_DOWNLOAD_URL https://golang.org/dl/go$GOLANG_VERSION.linux-amd64.tar.gz
ENV GOLANG_DOWNLOAD_SHA256 b5a64335f1490277b585832d1f6c7f8c6c11206cba5cd3f771dcb87b98ad1a33
RUN curl -fsSL "$GOLANG_DOWNLOAD_URL" -o golang.tar.gz \
    && echo "$GOLANG_DOWNLOAD_SHA256  golang.tar.gz" | sha256sum -c - \
    && tar -C /usr/local -xzf golang.tar.gz \
    && rm golang.tar.gz
ENV GOPATH /go
ENV GOROOT /usr/local/go
ENV PATH $GOPATH/bin:$GOROOT/bin:$PATH

ENTRYPOINT ["/bin/bash"]
@ -1,9 +0,0 @@
#!/bin/bash

set -e

perf record -F 99 -p "$1" -g -- sleep 60
perf script > out.perf
/opt/FlameGraph/stackcollapse-perf.pl out.perf > out.folded
/opt/FlameGraph/flamegraph.pl out.folded > kernel.svg
curl --upload-file ./kernel.svg https://transfer.sh/kernel.svg
@ -1,9 +0,0 @@
FROM alpine:3.5

ADD bin/pd-server /pd-server

WORKDIR /

EXPOSE 2379 2380

ENTRYPOINT ["/pd-server"]
@ -1,11 +0,0 @@
FROM alpine:3.5

ADD bin/pump /pump

ADD bin/drainer /drainer

RUN chmod +x /pump /drainer

WORKDIR /

EXPOSE 8249 8250
@ -1,13 +0,0 @@
FROM node:8

ADD tidb-vision /home/node/tidb-vision

WORKDIR /home/node/tidb-vision

RUN npm install

ENV PD_ENDPOINT=localhost:9000

EXPOSE 8010

CMD ["npm", "start"]
@ -1,11 +0,0 @@
FROM alpine:3.5

ADD bin/tidb-server /tidb-server

RUN chmod +x /tidb-server

WORKDIR /

EXPOSE 4000 10080

ENTRYPOINT ["/tidb-server"]
@ -1,11 +0,0 @@
FROM pingcap/alpine-glibc

ADD bin/tikv-server /tikv-server

RUN chmod +x /tikv-server

WORKDIR /

EXPOSE 20160

ENTRYPOINT ["/tikv-server"]
@ -1,40 +0,0 @@
FROM anapsix/alpine-java:8

ENV SPARK_VERSION=2.4.3 \
    HADOOP_VERSION=2.7 \
    TISPARK_PYTHON_VERSION=2.0 \
    SPARK_HOME=/opt/spark \
    SPARK_NO_DAEMONIZE=true \
    SPARK_MASTER_PORT=7077 \
    SPARK_MASTER_HOST=0.0.0.0 \
    SPARK_MASTER_WEBUI_PORT=8080

ADD tispark-tests /opt/tispark-tests

# The base image only contains the busybox versions of nohup and ps.
# The Spark scripts need nohup from coreutils and ps from procps,
# and mysql-client lets us test the TiDB connection.
RUN apk --no-cache add \
    coreutils \
    mysql-client \
    procps \
    python \
    py-pip \
    R

RUN wget -q https://download.pingcap.org/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz \
    && tar zxf spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz -C /opt/ \
    && ln -s /opt/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION} ${SPARK_HOME} \
    && wget -q http://download.pingcap.org/tispark-assembly-latest-linux-amd64.tar.gz \
    && tar zxf ./tispark-assembly-latest-linux-amd64.tar.gz -C /opt/ \
    && cp /opt/assembly/target/tispark-assembly-*.jar ${SPARK_HOME}/jars \
    && wget -q http://download.pingcap.org/tispark-sample-data.tar.gz \
    && tar zxf tispark-sample-data.tar.gz -C ${SPARK_HOME}/data/ \
    && rm -rf /opt/assembly/ spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz tispark-assembly-latest-linux-amd64.tar.gz tispark-sample-data.tar.gz

ADD spark-${SPARK_VERSION}/session.py ${SPARK_HOME}/python/pyspark/sql/
ADD conf/log4j.properties /opt/spark/conf/log4j.properties

ENV PYTHONPATH=${SPARK_HOME}/python/lib/py4j-0.10.4-src.zip:${SPARK_HOME}/python:$PYTHONPATH

WORKDIR ${SPARK_HOME}
@ -1,43 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Set the default spark-shell log level to WARN. When running the spark-shell, the
# log level for this class is used to overwrite the root logger's log level, so that
# the user can have different defaults for the shell and regular Spark apps.
log4j.logger.org.apache.spark.repl.Main=WARN

# Settings to quiet third party logs that are too verbose
log4j.logger.org.spark_project.jetty=WARN
log4j.logger.org.spark_project.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
log4j.logger.org.apache.parquet=ERROR
log4j.logger.parquet=ERROR

# SPARK-9183: Settings to avoid annoying messages when looking up nonexistent UDFs in SparkSQL with Hive support
log4j.logger.org.apache.hadoop.hive.metastore.RetryingHMSHandler=FATAL
log4j.logger.org.apache.hadoop.hive.ql.exec.FunctionRegistry=ERROR

# tispark disable "WARN ObjectStore:568 - Failed to get database"
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
@ -1,811 +0,0 @@
|
||||
#
|
||||
# Licensed to the Apache Software Foundation (ASF) under one or more
|
||||
# contributor license agreements. See the NOTICE file distributed with
|
||||
# this work for additional information regarding copyright ownership.
|
||||
# The ASF licenses this file to You under the Apache License, Version 2.0
|
||||
# (the "License"); you may not use this file except in compliance with
|
||||
# the License. You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
#
|
||||
|
||||
from __future__ import print_function
|
||||
import sys
|
||||
import warnings
|
||||
from functools import reduce
|
||||
from threading import RLock
|
||||
|
||||
if sys.version >= '3':
|
||||
basestring = unicode = str
|
||||
xrange = range
|
||||
else:
|
||||
from itertools import izip as zip, imap as map
|
||||
|
||||
from pyspark import since
|
||||
from pyspark.rdd import RDD, ignore_unicode_prefix
|
||||
from pyspark.sql.conf import RuntimeConfig
|
||||
from pyspark.sql.dataframe import DataFrame
|
||||
from pyspark.sql.readwriter import DataFrameReader
|
||||
from pyspark.sql.streaming import DataStreamReader
|
||||
from pyspark.sql.types import Row, DataType, StringType, StructType, TimestampType, \
|
||||
_make_type_verifier, _infer_schema, _has_nulltype, _merge_type, _create_converter, \
|
||||
_parse_datatype_string
|
||||
from pyspark.sql.utils import install_exception_handler
|
||||
|
||||
__all__ = ["SparkSession"]
|
||||
|
||||
|
||||
def _monkey_patch_RDD(sparkSession):
|
||||
def toDF(self, schema=None, sampleRatio=None):
|
||||
"""
|
||||
Converts current :class:`RDD` into a :class:`DataFrame`
|
||||
|
||||
This is a shorthand for ``spark.createDataFrame(rdd, schema, sampleRatio)``
|
||||
|
||||
:param schema: a :class:`pyspark.sql.types.StructType` or list of names of columns
|
||||
:param samplingRatio: the sample ratio of rows used for inferring
|
||||
:return: a DataFrame
|
||||
|
||||
>>> rdd.toDF().collect()
|
||||
[Row(name=u'Alice', age=1)]
|
||||
"""
|
||||
return sparkSession.createDataFrame(self, schema, sampleRatio)
|
||||
|
||||
RDD.toDF = toDF
|
||||
|
||||
|
||||
class SparkSession(object):
|
||||
"""The entry point to programming Spark with the Dataset and DataFrame API.
|
||||
|
||||
A SparkSession can be used create :class:`DataFrame`, register :class:`DataFrame` as
|
||||
tables, execute SQL over tables, cache tables, and read parquet files.
|
||||
To create a SparkSession, use the following builder pattern:
|
||||
|
||||
>>> spark = SparkSession.builder \\
|
||||
... .master("local") \\
|
||||
... .appName("Word Count") \\
|
||||
... .config("spark.some.config.option", "some-value") \\
|
||||
... .getOrCreate()
|
||||
|
||||
.. autoattribute:: builder
|
||||
:annotation:
|
||||
"""
|
||||
|
||||
class Builder(object):
|
||||
"""Builder for :class:`SparkSession`.
|
||||
"""
|
||||
|
||||
_lock = RLock()
|
||||
_options = {}
|
||||
|
||||
@since(2.0)
|
||||
def config(self, key=None, value=None, conf=None):
|
||||
"""Sets a config option. Options set using this method are automatically propagated to
|
||||
both :class:`SparkConf` and :class:`SparkSession`'s own configuration.
|
||||
|
||||
For an existing SparkConf, use `conf` parameter.
|
||||
|
||||
>>> from pyspark.conf import SparkConf
|
||||
>>> SparkSession.builder.config(conf=SparkConf())
|
||||
<pyspark.sql.session...
|
||||
|
||||
For a (key, value) pair, you can omit parameter names.
|
||||
|
||||
>>> SparkSession.builder.config("spark.some.config.option", "some-value")
|
||||
<pyspark.sql.session...
|
||||
|
||||
:param key: a key name string for configuration property
|
||||
:param value: a value for configuration property
|
||||
:param conf: an instance of :class:`SparkConf`
|
||||
"""
|
||||
with self._lock:
|
||||
if conf is None:
|
||||
self._options[key] = str(value)
|
||||
else:
|
||||
for (k, v) in conf.getAll():
|
||||
self._options[k] = v
|
||||
return self
|
||||
|
||||
@since(2.0)
|
||||
def master(self, master):
|
||||
"""Sets the Spark master URL to connect to, such as "local" to run locally, "local[4]"
|
||||
to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone
|
||||
cluster.
|
||||
|
||||
:param master: a url for spark master
|
||||
"""
|
||||
return self.config("spark.master", master)
|
||||
|
||||
@since(2.0)
|
||||
def appName(self, name):
|
||||
"""Sets a name for the application, which will be shown in the Spark web UI.
|
||||
|
||||
If no application name is set, a randomly generated name will be used.
|
||||
|
||||
:param name: an application name
|
||||
"""
|
||||
return self.config("spark.app.name", name)
|
||||
|
||||
@since(2.0)
|
||||
def enableHiveSupport(self):
|
||||
"""Enables Hive support, including connectivity to a persistent Hive metastore, support
|
||||
for Hive serdes, and Hive user-defined functions.
|
||||
"""
|
||||
return self.config("spark.sql.catalogImplementation", "hive")
|
||||
|
||||
@since(2.0)
|
||||
def getOrCreate(self):
|
||||
"""Gets an existing :class:`SparkSession` or, if there is no existing one, creates a
|
||||
new one based on the options set in this builder.
|
||||
|
||||
This method first checks whether there is a valid global default SparkSession, and if
|
||||
yes, return that one. If no valid global default SparkSession exists, the method
|
||||
creates a new SparkSession and assigns the newly created SparkSession as the global
|
||||
default.
|
||||
|
||||
>>> s1 = SparkSession.builder.config("k1", "v1").getOrCreate()
|
||||
>>> s1.conf.get("k1") == s1.sparkContext.getConf().get("k1") == "v1"
|
||||
True
|
||||
|
||||
In case an existing SparkSession is returned, the config options specified
|
||||
in this builder will be applied to the existing SparkSession.
|
||||
|
||||
>>> s2 = SparkSession.builder.config("k2", "v2").getOrCreate()
|
||||
>>> s1.conf.get("k1") == s2.conf.get("k1")
|
||||
True
|
||||
>>> s1.conf.get("k2") == s2.conf.get("k2")
|
||||
True
|
||||
"""
|
||||
with self._lock:
|
||||
from pyspark.context import SparkContext
|
||||
from pyspark.conf import SparkConf
|
||||
session = SparkSession._instantiatedSession
|
||||
if session is None or session._sc._jsc is None:
|
||||
sparkConf = SparkConf()
|
||||
for key, value in self._options.items():
|
||||
sparkConf.set(key, value)
|
||||
sc = SparkContext.getOrCreate(sparkConf)
|
||||
# This SparkContext may be an existing one.
|
||||
for key, value in self._options.items():
|
||||
# we need to propagate the confs
|
||||
# before we create the SparkSession. Otherwise, confs like
|
||||
# warehouse path and metastore url will not be set correctly (
|
||||
# these confs cannot be changed once the SparkSession is created).
|
||||
sc._conf.set(key, value)
|
||||
session = SparkSession(sc)
|
||||
for key, value in self._options.items():
|
||||
session._jsparkSession.sessionState().conf().setConfString(key, value)
|
||||
for key, value in self._options.items():
|
||||
session.sparkContext._conf.set(key, value)
|
||||
return session
|
||||
|
||||
builder = Builder()
|
||||
"""A class attribute having a :class:`Builder` to construct :class:`SparkSession` instances"""
|
||||
|
||||
_instantiatedSession = None
|
||||
|
||||
@ignore_unicode_prefix
|
||||
def __init__(self, sparkContext, jsparkSession=None):
|
||||
"""Creates a new SparkSession.
|
||||
|
||||
>>> from datetime import datetime
|
||||
>>> spark = SparkSession(sc)
|
||||
>>> allTypes = sc.parallelize([Row(i=1, s="string", d=1.0, l=1,
|
||||
... b=True, list=[1, 2, 3], dict={"s": 0}, row=Row(a=1),
|
||||
... time=datetime(2014, 8, 1, 14, 1, 5))])
|
||||
>>> df = allTypes.toDF()
|
||||
>>> df.createOrReplaceTempView("allTypes")
|
||||
>>> spark.sql('select i+1, d+1, not b, list[1], dict["s"], time, row.a '
|
||||
... 'from allTypes where b and i > 0').collect()
|
||||
[Row((i + CAST(1 AS BIGINT))=2, (d + CAST(1 AS DOUBLE))=2.0, (NOT b)=False, list[1]=2, \
|
||||
dict[s]=0, time=datetime.datetime(2014, 8, 1, 14, 1, 5), a=1)]
|
||||
>>> df.rdd.map(lambda x: (x.i, x.s, x.d, x.l, x.b, x.time, x.row.a, x.list)).collect()
|
||||
[(1, u'string', 1.0, 1, True, datetime.datetime(2014, 8, 1, 14, 1, 5), 1, [1, 2, 3])]
|
||||
"""
|
||||
from pyspark.sql.context import SQLContext
|
||||
self._sc = sparkContext
|
||||
self._jsc = self._sc._jsc
|
||||
self._jvm = self._sc._jvm
|
||||
if jsparkSession is None:
|
||||
jsparkSession = self._jvm.SparkSession.builder().getOrCreate()
|
||||
self._jsparkSession = jsparkSession
|
||||
self._jwrapped = self._jsparkSession.sqlContext()
|
||||
self._wrapped = SQLContext(self._sc, self, self._jwrapped)
|
||||
_monkey_patch_RDD(self)
|
||||
install_exception_handler()
|
||||
# If we had an instantiated SparkSession attached with a SparkContext
|
||||
# which is stopped now, we need to renew the instantiated SparkSession.
|
||||
# Otherwise, we will use invalid SparkSession when we call Builder.getOrCreate.
|
||||
if SparkSession._instantiatedSession is None \
|
||||
or SparkSession._instantiatedSession._sc._jsc is None:
|
||||
SparkSession._instantiatedSession = self
|
||||
|
||||
def _repr_html_(self):
|
||||
return """
|
||||
<div>
|
||||
<p><b>SparkSession - {catalogImplementation}</b></p>
|
||||
{sc_HTML}
|
||||
</div>
|
||||
""".format(
|
||||
catalogImplementation=self.conf.get("spark.sql.catalogImplementation"),
|
||||
sc_HTML=self.sparkContext._repr_html_()
|
||||
)
|
||||
|
||||
@since(2.0)
|
||||
def newSession(self):
|
||||
"""
|
||||
Returns a new SparkSession as new session, that has separate SQLConf,
|
||||
registered temporary views and UDFs, but shared SparkContext and
|
||||
table cache.
|
||||
"""
|
||||
return self.__class__(self._sc, self._jsparkSession.newSession())
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def sparkContext(self):
|
||||
"""Returns the underlying :class:`SparkContext`."""
|
||||
return self._sc
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def version(self):
|
||||
"""The version of Spark on which this application is running."""
|
||||
return self._jsparkSession.version()
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def conf(self):
|
||||
"""Runtime configuration interface for Spark.
|
||||
|
||||
This is the interface through which the user can get and set all Spark and Hadoop
|
||||
configurations that are relevant to Spark SQL. When getting the value of a config,
|
||||
this defaults to the value set in the underlying :class:`SparkContext`, if any.
|
||||
"""
|
||||
if not hasattr(self, "_conf"):
|
||||
self._conf = RuntimeConfig(self._jsparkSession.conf())
|
||||
return self._conf
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def catalog(self):
|
||||
"""Interface through which the user may create, drop, alter or query underlying
|
||||
databases, tables, functions etc.
|
||||
|
||||
:return: :class:`Catalog`
|
||||
"""
|
||||
from pyspark.sql.catalog import Catalog
|
||||
if not hasattr(self, "_catalog"):
|
||||
self._catalog = Catalog(self)
|
||||
return self._catalog
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def udf(self):
|
||||
"""Returns a :class:`UDFRegistration` for UDF registration.
|
||||
|
||||
:return: :class:`UDFRegistration`
|
||||
"""
|
||||
from pyspark.sql.udf import UDFRegistration
|
||||
return UDFRegistration(self)
|
||||
|
||||
@since(2.0)
|
||||
def range(self, start, end=None, step=1, numPartitions=None):
|
||||
"""
|
||||
Create a :class:`DataFrame` with single :class:`pyspark.sql.types.LongType` column named
|
||||
``id``, containing elements in a range from ``start`` to ``end`` (exclusive) with
|
||||
step value ``step``.
|
||||
|
||||
:param start: the start value
|
||||
:param end: the end value (exclusive)
|
||||
:param step: the incremental step (default: 1)
|
||||
:param numPartitions: the number of partitions of the DataFrame
|
||||
:return: :class:`DataFrame`
|
||||
|
||||
>>> spark.range(1, 7, 2).collect()
|
||||
[Row(id=1), Row(id=3), Row(id=5)]
|
||||
|
||||
If only one argument is specified, it will be used as the end value.
|
||||
|
||||
>>> spark.range(3).collect()
|
||||
[Row(id=0), Row(id=1), Row(id=2)]
|
||||
"""
|
||||
if numPartitions is None:
|
||||
numPartitions = self._sc.defaultParallelism
|
||||
|
||||
if end is None:
|
||||
jdf = self._jsparkSession.range(0, int(start), int(step), int(numPartitions))
|
||||
else:
|
||||
jdf = self._jsparkSession.range(int(start), int(end), int(step), int(numPartitions))
|
||||
|
||||
return DataFrame(jdf, self._wrapped)

    def _inferSchemaFromList(self, data, names=None):
        """
        Infer schema from a list of Row or tuple.

        :param data: list of Row or tuple
        :param names: list of column names
        :return: :class:`pyspark.sql.types.StructType`
        """
        if not data:
            raise ValueError("can not infer schema from empty dataset")
        first = data[0]
        if type(first) is dict:
            warnings.warn("inferring schema from dict is deprecated, "
                          "please use pyspark.sql.Row instead")
        schema = reduce(_merge_type, (_infer_schema(row, names) for row in data))
        if _has_nulltype(schema):
            raise ValueError("Some of types cannot be determined after inferring")
        return schema
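The merge-based inference above can be sketched in miniature: infer a type per value, fold the per-row types together with a merge function, and fail if any column is still undetermined (every value was None). Here `infer_type`, `merge_types`, and `infer_schema` are hypothetical toy stand-ins for `_infer_schema`/`_merge_type`, not the real pyspark helpers:

```python
from functools import reduce

def infer_type(value):
    # None contributes no type information (NullType in the real code)
    return None if value is None else type(value).__name__

def merge_types(a, b):
    if a is None:
        return b
    if b is None or a == b:
        return a
    raise TypeError("incompatible types: %s vs %s" % (a, b))

def infer_schema(rows):
    # transpose rows into columns, then fold each column's types together
    columns = zip(*rows)
    schema = [reduce(merge_types, (infer_type(v) for v in col)) for col in columns]
    if any(t is None for t in schema):
        raise ValueError("can not infer schema: a column contains only None")
    return schema

print(infer_schema([(1, None), (2, 'a')]))  # → ['int', 'str']
```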

    def _inferSchema(self, rdd, samplingRatio=None, names=None):
        """
        Infer schema from an RDD of Row or tuple.

        :param rdd: an RDD of Row or tuple
        :param samplingRatio: sampling ratio, or no sampling (default)
        :return: :class:`pyspark.sql.types.StructType`
        """
        first = rdd.first()
        if not first:
            raise ValueError("The first row in RDD is empty, "
                             "can not infer schema")
        if type(first) is dict:
            warnings.warn("Using RDD of dict to inferSchema is deprecated. "
                          "Use pyspark.sql.Row instead")

        if samplingRatio is None:
            schema = _infer_schema(first, names=names)
            if _has_nulltype(schema):
                for row in rdd.take(100)[1:]:
                    schema = _merge_type(schema, _infer_schema(row, names=names))
                    if not _has_nulltype(schema):
                        break
                else:
                    raise ValueError("Some of types cannot be determined by the "
                                     "first 100 rows, please try again with sampling")
        else:
            if samplingRatio < 0.99:
                rdd = rdd.sample(False, float(samplingRatio))
            schema = rdd.map(lambda row: _infer_schema(row, names)).reduce(_merge_type)
        return schema

    def _createFromRDD(self, rdd, schema, samplingRatio):
        """
        Create an RDD for DataFrame from an existing RDD, returns the RDD and schema.
        """
        if schema is None or isinstance(schema, (list, tuple)):
            struct = self._inferSchema(rdd, samplingRatio, names=schema)
            converter = _create_converter(struct)
            rdd = rdd.map(converter)
            if isinstance(schema, (list, tuple)):
                for i, name in enumerate(schema):
                    struct.fields[i].name = name
                    struct.names[i] = name
                schema = struct

        elif not isinstance(schema, StructType):
            raise TypeError("schema should be StructType or list or None, but got: %s" % schema)

        # convert python objects to sql data
        rdd = rdd.map(schema.toInternal)
        return rdd, schema

    def _createFromLocal(self, data, schema):
        """
        Create an RDD for DataFrame from a list or pandas.DataFrame, returns
        the RDD and schema.
        """
        # make sure data can be consumed multiple times
        if not isinstance(data, list):
            data = list(data)

        if schema is None or isinstance(schema, (list, tuple)):
            struct = self._inferSchemaFromList(data, names=schema)
            converter = _create_converter(struct)
            data = map(converter, data)
            if isinstance(schema, (list, tuple)):
                for i, name in enumerate(schema):
                    struct.fields[i].name = name
                    struct.names[i] = name
                schema = struct

        elif not isinstance(schema, StructType):
            raise TypeError("schema should be StructType or list or None, but got: %s" % schema)

        # convert python objects to sql data
        data = [schema.toInternal(row) for row in data]
        return self._sc.parallelize(data), schema

    def _get_numpy_record_dtype(self, rec):
        """
        Used when converting a pandas.DataFrame to Spark using to_records(); this will correct
        the dtypes of fields in a record so they can be properly loaded into Spark.

        :param rec: a numpy record to check field dtypes
        :return: corrected dtype for a numpy.record, or None if no correction is needed
        """
        import numpy as np
        cur_dtypes = rec.dtype
        col_names = cur_dtypes.names
        record_type_list = []
        has_rec_fix = False
        for i in xrange(len(cur_dtypes)):
            curr_type = cur_dtypes[i]
            # If type is a datetime64 timestamp, convert to microseconds
            # NOTE: if dtype is datetime[ns] then np.record.tolist() will output values as longs;
            # conversion from [us] or lower will lead to py datetime objects, see SPARK-22417
            if curr_type == np.dtype('datetime64[ns]'):
                curr_type = 'datetime64[us]'
                has_rec_fix = True
            record_type_list.append((str(col_names[i]), curr_type))
        return np.dtype(record_type_list) if has_rec_fix else None
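The `[ns]` to `[us]` correction above matters because of how numpy converts datetime fields back to Python objects. A small standalone sketch in plain numpy (independent of Spark and pandas) illustrating the same behavior:

```python
import datetime
import numpy as np

# With datetime64[ns], record.tolist() yields the raw tick count as an int;
# at [us] or coarser it yields a datetime.datetime, which is what the
# downstream pickling path expects (see SPARK-22417).
recs = np.rec.array([(1, np.datetime64('2014-08-01T14:01:05', 'ns'))],
                    dtype=[('id', 'int64'), ('ts', 'datetime64[ns]')])
raw = recs[0].tolist()      # ts field comes back as a plain int (nanoseconds)

fixed = recs[0].astype(np.dtype([('id', 'int64'), ('ts', 'datetime64[us]')]))
conv = fixed.tolist()       # ts field is now a datetime.datetime
```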

    def _convert_from_pandas(self, pdf, schema, timezone):
        """
        Convert a pandas.DataFrame to a list of records that can be used to make a DataFrame.

        :return: list of records
        """
        if timezone is not None:
            from pyspark.sql.types import _check_series_convert_timestamps_tz_local
            copied = False
            if isinstance(schema, StructType):
                for field in schema:
                    # TODO: handle nested timestamps, such as ArrayType(TimestampType())?
                    if isinstance(field.dataType, TimestampType):
                        s = _check_series_convert_timestamps_tz_local(pdf[field.name], timezone)
                        if s is not pdf[field.name]:
                            if not copied:
                                # Copy once if the series is modified to prevent the original
                                # Pandas DataFrame from being updated
                                pdf = pdf.copy()
                                copied = True
                            pdf[field.name] = s
            else:
                for column, series in pdf.iteritems():
                    s = _check_series_convert_timestamps_tz_local(series, timezone)
                    if s is not series:
                        if not copied:
                            # Copy once if the series is modified to prevent the original
                            # Pandas DataFrame from being updated
                            pdf = pdf.copy()
                            copied = True
                        pdf[column] = s

        # Convert pandas.DataFrame to list of numpy records
        np_records = pdf.to_records(index=False)

        # Check if any columns need to be fixed for Spark to infer properly
        if len(np_records) > 0:
            record_dtype = self._get_numpy_record_dtype(np_records[0])
            if record_dtype is not None:
                return [r.astype(record_dtype).tolist() for r in np_records]

        # Convert list of numpy records to python lists
        return [r.tolist() for r in np_records]

    def _create_from_pandas_with_arrow(self, pdf, schema, timezone):
        """
        Create a DataFrame from a given pandas.DataFrame by slicing it into partitions, converting
        to Arrow data, then sending to the JVM to parallelize. If a schema is passed in, the
        data types will be used to coerce the data in Pandas to Arrow conversion.
        """
        from pyspark.serializers import ArrowSerializer, _create_batch
        from pyspark.sql.types import from_arrow_schema, to_arrow_type, TimestampType
        from pyspark.sql.utils import require_minimum_pandas_version, \
            require_minimum_pyarrow_version

        require_minimum_pandas_version()
        require_minimum_pyarrow_version()

        from pandas.api.types import is_datetime64_dtype, is_datetime64tz_dtype

        # Determine arrow types to coerce data when creating batches
        if isinstance(schema, StructType):
            arrow_types = [to_arrow_type(f.dataType) for f in schema.fields]
        elif isinstance(schema, DataType):
            raise ValueError("Single data type %s is not supported with Arrow" % str(schema))
        else:
            # Any timestamps must be coerced to be compatible with Spark
            arrow_types = [to_arrow_type(TimestampType())
                           if is_datetime64_dtype(t) or is_datetime64tz_dtype(t) else None
                           for t in pdf.dtypes]

        # Slice the DataFrame to be batched
        step = -(-len(pdf) // self.sparkContext.defaultParallelism)  # round int up
        pdf_slices = (pdf[start:start + step] for start in xrange(0, len(pdf), step))

        # Create Arrow record batches
        batches = [_create_batch([(c, t) for (_, c), t in zip(pdf_slice.iteritems(), arrow_types)],
                                 timezone)
                   for pdf_slice in pdf_slices]

        # Create the Spark schema from the first Arrow batch (always at least 1 batch after slicing)
        if isinstance(schema, (list, tuple)):
            struct = from_arrow_schema(batches[0].schema)
            for i, name in enumerate(schema):
                struct.fields[i].name = name
                struct.names[i] = name
            schema = struct

        # Create the Spark DataFrame directly from the Arrow data and schema
        jrdd = self._sc._serialize_to_jvm(batches, len(batches), ArrowSerializer())
        jdf = self._jvm.PythonSQLUtils.arrowPayloadToDataFrame(
            jrdd, schema.json(), self._wrapped._jsqlContext)
        df = DataFrame(jdf, self._wrapped)
        df._schema = schema
        return df
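The slicing step above relies on the ceiling-division idiom `-(-n // k)` so that all rows land in at most `parallelism` contiguous slices. A minimal standalone sketch (`slice_into_batches` is a hypothetical helper, not a pyspark function):

```python
def slice_into_batches(rows, parallelism):
    # ceil(len(rows) / parallelism) without importing math:
    # negate, floor-divide, negate again
    step = -(-len(rows) // parallelism)
    return [rows[start:start + step] for start in range(0, len(rows), step)]

print(slice_into_batches(list(range(10)), 3))
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note the last slice may be shorter, and fewer than `parallelism` slices are produced when the rows do not divide evenly.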

    @since(2.0)
    @ignore_unicode_prefix
    def createDataFrame(self, data, schema=None, samplingRatio=None, verifySchema=True):
        """
        Creates a :class:`DataFrame` from an :class:`RDD`, a list or a :class:`pandas.DataFrame`.

        When ``schema`` is a list of column names, the type of each column
        will be inferred from ``data``.

        When ``schema`` is ``None``, it will try to infer the schema (column names and types)
        from ``data``, which should be an RDD of :class:`Row`,
        or :class:`namedtuple`, or :class:`dict`.

        When ``schema`` is :class:`pyspark.sql.types.DataType` or a datatype string, it must match
        the real data, or an exception will be thrown at runtime. If the given schema is not
        :class:`pyspark.sql.types.StructType`, it will be wrapped into a
        :class:`pyspark.sql.types.StructType` as its only field, and the field name will be
        "value". Each record will also be wrapped into a tuple, which can be converted to a
        row later.

        If schema inference is needed, ``samplingRatio`` is used to determine the ratio of
        rows used for schema inference. The first row will be used if ``samplingRatio`` is ``None``.

        :param data: an RDD of any kind of SQL data representation (e.g. row, tuple, int, boolean,
            etc.), or :class:`list`, or :class:`pandas.DataFrame`.
        :param schema: a :class:`pyspark.sql.types.DataType` or a datatype string or a list of
            column names, default is ``None``. The data type string format equals to
            :class:`pyspark.sql.types.DataType.simpleString`, except that top level struct type can
            omit the ``struct<>`` and atomic types use ``typeName()`` as their format, e.g. use
            ``byte`` instead of ``tinyint`` for :class:`pyspark.sql.types.ByteType`. We can also use
            ``int`` as a short name for ``IntegerType``.
        :param samplingRatio: the sample ratio of rows used for inferring
        :param verifySchema: verify data types of every row against schema.
        :return: :class:`DataFrame`

        .. versionchanged:: 2.1
           Added verifySchema.

        >>> l = [('Alice', 1)]
        >>> spark.createDataFrame(l).collect()
        [Row(_1=u'Alice', _2=1)]
        >>> spark.createDataFrame(l, ['name', 'age']).collect()
        [Row(name=u'Alice', age=1)]

        >>> d = [{'name': 'Alice', 'age': 1}]
        >>> spark.createDataFrame(d).collect()
        [Row(age=1, name=u'Alice')]

        >>> rdd = sc.parallelize(l)
        >>> spark.createDataFrame(rdd).collect()
        [Row(_1=u'Alice', _2=1)]
        >>> df = spark.createDataFrame(rdd, ['name', 'age'])
        >>> df.collect()
        [Row(name=u'Alice', age=1)]

        >>> from pyspark.sql import Row
        >>> Person = Row('name', 'age')
        >>> person = rdd.map(lambda r: Person(*r))
        >>> df2 = spark.createDataFrame(person)
        >>> df2.collect()
        [Row(name=u'Alice', age=1)]

        >>> from pyspark.sql.types import *
        >>> schema = StructType([
        ...    StructField("name", StringType(), True),
        ...    StructField("age", IntegerType(), True)])
        >>> df3 = spark.createDataFrame(rdd, schema)
        >>> df3.collect()
        [Row(name=u'Alice', age=1)]

        >>> spark.createDataFrame(df.toPandas()).collect()  # doctest: +SKIP
        [Row(name=u'Alice', age=1)]
        >>> spark.createDataFrame(pandas.DataFrame([[1, 2]])).collect()  # doctest: +SKIP
        [Row(0=1, 1=2)]

        >>> spark.createDataFrame(rdd, "a: string, b: int").collect()
        [Row(a=u'Alice', b=1)]
        >>> rdd = rdd.map(lambda row: row[1])
        >>> spark.createDataFrame(rdd, "int").collect()
        [Row(value=1)]
        >>> spark.createDataFrame(rdd, "boolean").collect()  # doctest: +IGNORE_EXCEPTION_DETAIL
        Traceback (most recent call last):
            ...
        Py4JJavaError: ...
        """
        if isinstance(data, DataFrame):
            raise TypeError("data is already a DataFrame")

        if isinstance(schema, basestring):
            schema = _parse_datatype_string(schema)
        elif isinstance(schema, (list, tuple)):
            # Must re-encode any unicode strings to be consistent with StructField names
            schema = [x.encode('utf-8') if not isinstance(x, str) else x for x in schema]

        try:
            import pandas
            has_pandas = True
        except Exception:
            has_pandas = False
        if has_pandas and isinstance(data, pandas.DataFrame):
            from pyspark.sql.utils import require_minimum_pandas_version
            require_minimum_pandas_version()

            if self.conf.get("spark.sql.execution.pandas.respectSessionTimeZone").lower() \
                    == "true":
                timezone = self.conf.get("spark.sql.session.timeZone")
            else:
                timezone = None

            # If no schema supplied by user then get the names of columns only
            if schema is None:
                schema = [str(x) if not isinstance(x, basestring) else
                          (x.encode('utf-8') if not isinstance(x, str) else x)
                          for x in data.columns]

            if self.conf.get("spark.sql.execution.arrow.enabled", "false").lower() == "true" \
                    and len(data) > 0:
                try:
                    return self._create_from_pandas_with_arrow(data, schema, timezone)
                except Exception as e:
                    warnings.warn("Arrow will not be used in createDataFrame: %s" % str(e))
                    # Fall back to creating the DataFrame without Arrow if an exception is raised
            data = self._convert_from_pandas(data, schema, timezone)

        if isinstance(schema, StructType):
            verify_func = _make_type_verifier(schema) if verifySchema else lambda _: True

            def prepare(obj):
                verify_func(obj)
                return obj
        elif isinstance(schema, DataType):
            dataType = schema
            schema = StructType().add("value", schema)

            verify_func = _make_type_verifier(
                dataType, name="field value") if verifySchema else lambda _: True

            def prepare(obj):
                verify_func(obj)
                return obj,
        else:
            prepare = lambda obj: obj

        if isinstance(data, RDD):
            rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
        else:
            rdd, schema = self._createFromLocal(map(prepare, data), schema)
        jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
        jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
        df = DataFrame(jdf, self._wrapped)
        df._schema = schema
        return df

    @ignore_unicode_prefix
    @since(2.0)
    def sql(self, sqlQuery):
        """Returns a :class:`DataFrame` representing the result of the given query.

        :return: :class:`DataFrame`

        >>> df.createOrReplaceTempView("table1")
        >>> df2 = spark.sql("SELECT field1 AS f1, field2 as f2 from table1")
        >>> df2.collect()
        [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
        """
        return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)

    @since(2.0)
    def table(self, tableName):
        """Returns the specified table as a :class:`DataFrame`.

        :return: :class:`DataFrame`

        >>> df.createOrReplaceTempView("table1")
        >>> df2 = spark.table("table1")
        >>> sorted(df.collect()) == sorted(df2.collect())
        True
        """
        return DataFrame(self._jsparkSession.table(tableName), self._wrapped)

    @property
    @since(2.0)
    def read(self):
        """
        Returns a :class:`DataFrameReader` that can be used to read data
        in as a :class:`DataFrame`.

        :return: :class:`DataFrameReader`
        """
        return DataFrameReader(self._wrapped)

    @property
    @since(2.0)
    def readStream(self):
        """
        Returns a :class:`DataStreamReader` that can be used to read data streams
        as a streaming :class:`DataFrame`.

        .. note:: Evolving.

        :return: :class:`DataStreamReader`
        """
        return DataStreamReader(self._wrapped)

    @property
    @since(2.0)
    def streams(self):
        """Returns a :class:`StreamingQueryManager` that allows managing all the
        :class:`StreamingQuery` instances active on `this` context.

        .. note:: Evolving.

        :return: :class:`StreamingQueryManager`
        """
        from pyspark.sql.streaming import StreamingQueryManager
        return StreamingQueryManager(self._jsparkSession.streams())

    @since(2.0)
    def stop(self):
        """Stop the underlying :class:`SparkContext`.
        """
        self._sc.stop()
        SparkSession._instantiatedSession = None

    @since(2.0)
    def __enter__(self):
        """
        Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.
        """
        return self

    @since(2.0)
    def __exit__(self, exc_type, exc_val, exc_tb):
        """
        Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.

        Specifically stop the SparkSession on exit of the with block.
        """
        self.stop()


def _test():
    import os
    import doctest
    from pyspark.context import SparkContext
    from pyspark.sql import Row
    import pyspark.sql.session

    os.chdir(os.environ["SPARK_HOME"])

    globs = pyspark.sql.session.__dict__.copy()
    sc = SparkContext('local[4]', 'PythonTest')
    globs['sc'] = sc
    globs['spark'] = SparkSession(sc)
    globs['rdd'] = rdd = sc.parallelize(
        [Row(field1=1, field2="row1"),
         Row(field1=2, field2="row2"),
         Row(field1=3, field2="row3")])
    globs['df'] = rdd.toDF()
    (failure_count, test_count) = doctest.testmod(
        pyspark.sql.session, globs=globs,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE)
    globs['sc'].stop()
    if failure_count:
        exit(-1)


if __name__ == "__main__":
    _test()

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

from __future__ import print_function
import sys
import warnings
from functools import reduce
from threading import RLock

if sys.version >= '3':
    basestring = unicode = str
    xrange = range
else:
    from itertools import izip as zip, imap as map

from pyspark import since
from pyspark.rdd import RDD, ignore_unicode_prefix
from pyspark.sql.conf import RuntimeConfig
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.readwriter import DataFrameReader
from pyspark.sql.streaming import DataStreamReader
from pyspark.sql.types import Row, DataType, StringType, StructType, TimestampType, \
    _make_type_verifier, _infer_schema, _has_nulltype, _merge_type, _create_converter, \
    _parse_datatype_string
from pyspark.sql.utils import install_exception_handler

__all__ = ["SparkSession"]


def _monkey_patch_RDD(sparkSession):
    def toDF(self, schema=None, sampleRatio=None):
        """
        Converts the current :class:`RDD` into a :class:`DataFrame`.

        This is a shorthand for ``spark.createDataFrame(rdd, schema, sampleRatio)``.

        :param schema: a :class:`pyspark.sql.types.StructType` or list of names of columns
        :param samplingRatio: the sample ratio of rows used for inferring
        :return: a DataFrame

        >>> rdd.toDF().collect()
        [Row(name=u'Alice', age=1)]
        """
        return sparkSession.createDataFrame(self, schema, sampleRatio)

    RDD.toDF = toDF
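`_monkey_patch_RDD` works by assigning a closure onto an existing class at runtime. A minimal sketch of the same pattern with a made-up `Record` class (hypothetical, not part of pyspark):

```python
class Record(object):
    def __init__(self, value):
        self.value = value

def monkey_patch_record(prefix):
    # The new method closes over `prefix`, just as toDF closes over the
    # SparkSession; every existing and future Record instance gains it.
    def describe(self):
        return "%s:%s" % (prefix, self.value)
    Record.describe = describe

monkey_patch_record("rec")
print(Record(7).describe())  # → rec:7
```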


class SparkSession(object):
    """The entry point to programming Spark with the Dataset and DataFrame API.

    A SparkSession can be used to create a :class:`DataFrame`, register :class:`DataFrame` as
    tables, execute SQL over tables, cache tables, and read parquet files.
    To create a SparkSession, use the following builder pattern:

    >>> spark = SparkSession.builder \\
    ...     .master("local") \\
    ...     .appName("Word Count") \\
    ...     .config("spark.some.config.option", "some-value") \\
    ...     .getOrCreate()

    .. autoattribute:: builder
       :annotation:
    """

    class Builder(object):
        """Builder for :class:`SparkSession`.
        """

        _lock = RLock()
        _options = {}

        @since(2.0)
        def config(self, key=None, value=None, conf=None):
            """Sets a config option. Options set using this method are automatically propagated to
            both :class:`SparkConf` and :class:`SparkSession`'s own configuration.

            For an existing SparkConf, use the `conf` parameter.

            >>> from pyspark.conf import SparkConf
            >>> SparkSession.builder.config(conf=SparkConf())
            <pyspark.sql.session...

            For a (key, value) pair, you can omit parameter names.

            >>> SparkSession.builder.config("spark.some.config.option", "some-value")
            <pyspark.sql.session...

            :param key: a key name string for configuration property
            :param value: a value for configuration property
            :param conf: an instance of :class:`SparkConf`
            """
            with self._lock:
                if conf is None:
                    self._options[key] = str(value)
                else:
                    for (k, v) in conf.getAll():
                        self._options[k] = v
                return self

        @since(2.0)
        def master(self, master):
            """Sets the Spark master URL to connect to, such as "local" to run locally, "local[4]"
            to run locally with 4 cores, or "spark://master:7077" to run on a Spark standalone
            cluster.

            :param master: a url for spark master
            """
            return self.config("spark.master", master)

        @since(2.0)
        def appName(self, name):
            """Sets a name for the application, which will be shown in the Spark web UI.

            If no application name is set, a randomly generated name will be used.

            :param name: an application name
            """
            return self.config("spark.app.name", name)

        @since(2.0)
        def enableHiveSupport(self):
            """Enables Hive support, including connectivity to a persistent Hive metastore, support
            for Hive serdes, and Hive user-defined functions.
            """
            return self.config("spark.sql.catalogImplementation", "hive")

        @since(2.0)
        def getOrCreate(self):
            """Gets an existing :class:`SparkSession` or, if there is no existing one, creates a
            new one based on the options set in this builder.

            This method first checks whether there is a valid global default SparkSession, and if
            yes, returns that one. If no valid global default SparkSession exists, the method
            creates a new SparkSession and assigns the newly created SparkSession as the global
            default.

            >>> s1 = SparkSession.builder.config("k1", "v1").getOrCreate()
            >>> s1.conf.get("k1") == s1.sparkContext.getConf().get("k1") == "v1"
            True

            In case an existing SparkSession is returned, the config options specified
            in this builder will be applied to the existing SparkSession.

            >>> s2 = SparkSession.builder.config("k2", "v2").getOrCreate()
            >>> s1.conf.get("k1") == s2.conf.get("k1")
            True
            >>> s1.conf.get("k2") == s2.conf.get("k2")
            True
            """
            with self._lock:
                from pyspark.context import SparkContext
                from pyspark.conf import SparkConf
                session = SparkSession._instantiatedSession
                if session is None or session._sc._jsc is None:
                    sparkConf = SparkConf()
                    for key, value in self._options.items():
                        sparkConf.set(key, value)
                    sc = SparkContext.getOrCreate(sparkConf)
                    # This SparkContext may be an existing one.
                    for key, value in self._options.items():
                        # we need to propagate the confs
                        # before we create the SparkSession. Otherwise, confs like
                        # warehouse path and metastore url will not be set correctly (
                        # these confs cannot be changed once the SparkSession is created).
                        sc._conf.set(key, value)
                    session = SparkSession(sc)
                for key, value in self._options.items():
                    session._jsparkSession.sessionState().conf().setConfString(key, value)
                for key, value in self._options.items():
                    session.sparkContext._conf.set(key, value)
                return session

    builder = Builder()
    """A class attribute having a :class:`Builder` to construct :class:`SparkSession` instances."""

    _instantiatedSession = None

    @ignore_unicode_prefix
    def __init__(self, sparkContext, jsparkSession=None):
        """Creates a new SparkSession.

        >>> from datetime import datetime
        >>> spark = SparkSession(sc)
        >>> allTypes = sc.parallelize([Row(i=1, s="string", d=1.0, l=1,
        ...     b=True, list=[1, 2, 3], dict={"s": 0}, row=Row(a=1),
        ...     time=datetime(2014, 8, 1, 14, 1, 5))])
        >>> df = allTypes.toDF()
        >>> df.createOrReplaceTempView("allTypes")
        >>> spark.sql('select i+1, d+1, not b, list[1], dict["s"], time, row.a '
        ...            'from allTypes where b and i > 0').collect()
        [Row((i + CAST(1 AS BIGINT))=2, (d + CAST(1 AS DOUBLE))=2.0, (NOT b)=False, list[1]=2, \
            dict[s]=0, time=datetime.datetime(2014, 8, 1, 14, 1, 5), a=1)]
        >>> df.rdd.map(lambda x: (x.i, x.s, x.d, x.l, x.b, x.time, x.row.a, x.list)).collect()
        [(1, u'string', 1.0, 1, True, datetime.datetime(2014, 8, 1, 14, 1, 5), 1, [1, 2, 3])]
        """
        from pyspark.sql.context import SQLContext
        self._sc = sparkContext
        self._jsc = self._sc._jsc
        self._jvm = self._sc._jvm
        if jsparkSession is None:
            if self._jvm.SparkSession.getDefaultSession().isDefined() \
                    and not self._jvm.SparkSession.getDefaultSession().get() \
                        .sparkContext().isStopped():
                jsparkSession = self._jvm.SparkSession.getDefaultSession().get()
            else:
                jsparkSession = self._jvm.SparkSession.builder().getOrCreate()
                # jsparkSession = self._jvm.SparkSession(self._jsc.sc())
        self._jsparkSession = jsparkSession
        self._jwrapped = self._jsparkSession.sqlContext()
        self._wrapped = SQLContext(self._sc, self, self._jwrapped)
        _monkey_patch_RDD(self)
        install_exception_handler()
        # If we had an instantiated SparkSession attached with a SparkContext
        # which is stopped now, we need to renew the instantiated SparkSession.
        # Otherwise, we will use invalid SparkSession when we call Builder.getOrCreate.
        if SparkSession._instantiatedSession is None \
                or SparkSession._instantiatedSession._sc._jsc is None:
            SparkSession._instantiatedSession = self
            self._jvm.SparkSession.setDefaultSession(self._jsparkSession)

    def _repr_html_(self):
        return """
            <div>
                <p><b>SparkSession - {catalogImplementation}</b></p>
                {sc_HTML}
            </div>
        """.format(
            catalogImplementation=self.conf.get("spark.sql.catalogImplementation"),
            sc_HTML=self.sparkContext._repr_html_()
        )

    @since(2.0)
    def newSession(self):
        """
        Returns a new SparkSession as a new session, which has separate SQLConf,
        registered temporary views and UDFs, but shares the SparkContext and
        table cache.
        """
        return self.__class__(self._sc, self._jsparkSession.newSession())

    @property
    @since(2.0)
    def sparkContext(self):
        """Returns the underlying :class:`SparkContext`."""
        return self._sc

    @property
    @since(2.0)
    def version(self):
        """The version of Spark on which this application is running."""
        return self._jsparkSession.version()

    @property
    @since(2.0)
    def conf(self):
        """Runtime configuration interface for Spark.

        This is the interface through which the user can get and set all Spark and Hadoop
        configurations that are relevant to Spark SQL. When getting the value of a config,
        this defaults to the value set in the underlying :class:`SparkContext`, if any.
        """
        if not hasattr(self, "_conf"):
            self._conf = RuntimeConfig(self._jsparkSession.conf())
        return self._conf
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def catalog(self):
|
||||
"""Interface through which the user may create, drop, alter or query underlying
|
||||
databases, tables, functions etc.
|
||||
|
||||
:return: :class:`Catalog`
|
||||
"""
|
||||
from pyspark.sql.catalog import Catalog
|
||||
if not hasattr(self, "_catalog"):
|
||||
self._catalog = Catalog(self)
|
||||
return self._catalog
|
||||
|
||||
@property
|
||||
@since(2.0)
|
||||
def udf(self):
|
||||
"""Returns a :class:`UDFRegistration` for UDF registration.
|
||||
|
||||
:return: :class:`UDFRegistration`
|
||||
"""
|
||||
from pyspark.sql.udf import UDFRegistration
|
||||
return UDFRegistration(self)
|
||||
|
||||
@since(2.0)
|
||||
def range(self, start, end=None, step=1, numPartitions=None):
|
||||
"""
|
||||
Create a :class:`DataFrame` with single :class:`pyspark.sql.types.LongType` column named
|
||||
``id``, containing elements in a range from ``start`` to ``end`` (exclusive) with
|
||||
step value ``step``.
|
||||
|
||||
:param start: the start value
|
||||
:param end: the end value (exclusive)
|
||||
:param step: the incremental step (default: 1)
|
||||
:param numPartitions: the number of partitions of the DataFrame
|
||||
:return: :class:`DataFrame`
|
||||
|
||||
>>> spark.range(1, 7, 2).collect()
|
||||
[Row(id=1), Row(id=3), Row(id=5)]
|
||||
|
||||
If only one argument is specified, it will be used as the end value.
|
||||
|
||||
>>> spark.range(3).collect()
|
||||
[Row(id=0), Row(id=1), Row(id=2)]
|
||||
"""
|
||||
if numPartitions is None:
|
||||
numPartitions = self._sc.defaultParallelism
|
||||
|
||||
if end is None:
|
||||
jdf = self._jsparkSession.range(0, int(start), int(step), int(numPartitions))
|
||||
else:
|
||||
jdf = self._jsparkSession.range(int(start), int(end), int(step), int(numPartitions))
|
||||
|
||||
return DataFrame(jdf, self._wrapped)
|
||||
|
||||
    def _inferSchemaFromList(self, data, names=None):
        """
        Infer schema from list of Row or tuple.

        :param data: list of Row or tuple
        :param names: list of column names
        :return: :class:`pyspark.sql.types.StructType`
        """
        if not data:
            raise ValueError("can not infer schema from empty dataset")
        first = data[0]
        if type(first) is dict:
            warnings.warn("inferring schema from dict is deprecated,"
                          "please use pyspark.sql.Row instead")
        schema = reduce(_merge_type, (_infer_schema(row, names) for row in data))
        if _has_nulltype(schema):
            raise ValueError("Some of types cannot be determined after inferring")
        return schema

    def _inferSchema(self, rdd, samplingRatio=None, names=None):
        """
        Infer schema from an RDD of Row or tuple.

        :param rdd: an RDD of Row or tuple
        :param samplingRatio: sampling ratio, or no sampling (default)
        :return: :class:`pyspark.sql.types.StructType`
        """
        first = rdd.first()
        if not first:
            raise ValueError("The first row in RDD is empty, "
                             "can not infer schema")
        if type(first) is dict:
            warnings.warn("Using RDD of dict to inferSchema is deprecated. "
                          "Use pyspark.sql.Row instead")

        if samplingRatio is None:
            schema = _infer_schema(first, names=names)
            if _has_nulltype(schema):
                for row in rdd.take(100)[1:]:
                    schema = _merge_type(schema, _infer_schema(row, names=names))
                    if not _has_nulltype(schema):
                        break
                else:
                    raise ValueError("Some of types cannot be determined by the "
                                     "first 100 rows, please try again with sampling")
        else:
            if samplingRatio < 0.99:
                rdd = rdd.sample(False, float(samplingRatio))
            schema = rdd.map(lambda row: _infer_schema(row, names)).reduce(_merge_type)
        return schema

    def _createFromRDD(self, rdd, schema, samplingRatio):
        """
        Create an RDD for DataFrame from an existing RDD, returns the RDD and schema.
        """
        if schema is None or isinstance(schema, (list, tuple)):
            struct = self._inferSchema(rdd, samplingRatio, names=schema)
            converter = _create_converter(struct)
            rdd = rdd.map(converter)
            if isinstance(schema, (list, tuple)):
                for i, name in enumerate(schema):
                    struct.fields[i].name = name
                    struct.names[i] = name
            schema = struct

        elif not isinstance(schema, StructType):
            raise TypeError("schema should be StructType or list or None, but got: %s" % schema)

        # convert python objects to sql data
        rdd = rdd.map(schema.toInternal)
        return rdd, schema

    def _createFromLocal(self, data, schema):
        """
        Create an RDD for DataFrame from a list or pandas.DataFrame, returns
        the RDD and schema.
        """
        # make sure data can be consumed multiple times
        if not isinstance(data, list):
            data = list(data)

        if schema is None or isinstance(schema, (list, tuple)):
            struct = self._inferSchemaFromList(data, names=schema)
            converter = _create_converter(struct)
            data = map(converter, data)
            if isinstance(schema, (list, tuple)):
                for i, name in enumerate(schema):
                    struct.fields[i].name = name
                    struct.names[i] = name
            schema = struct

        elif not isinstance(schema, StructType):
            raise TypeError("schema should be StructType or list or None, but got: %s" % schema)

        # convert python objects to sql data
        data = [schema.toInternal(row) for row in data]
        return self._sc.parallelize(data), schema

    def _get_numpy_record_dtype(self, rec):
        """
        Used when converting a pandas.DataFrame to Spark using to_records(), this will correct
        the dtypes of fields in a record so they can be properly loaded into Spark.
        :param rec: a numpy record to check field dtypes
        :return corrected dtype for a numpy.record or None if no correction needed
        """
        import numpy as np
        cur_dtypes = rec.dtype
        col_names = cur_dtypes.names
        record_type_list = []
        has_rec_fix = False
        for i in xrange(len(cur_dtypes)):
            curr_type = cur_dtypes[i]
            # If type is a datetime64 timestamp, convert to microseconds
            # NOTE: if dtype is datetime[ns] then np.record.tolist() will output values as longs,
            # conversion from [us] or lower will lead to py datetime objects, see SPARK-22417
            if curr_type == np.dtype('datetime64[ns]'):
                curr_type = 'datetime64[us]'
                has_rec_fix = True
            record_type_list.append((str(col_names[i]), curr_type))
        return np.dtype(record_type_list) if has_rec_fix else None

    def _convert_from_pandas(self, pdf, schema, timezone):
        """
        Convert a pandas.DataFrame to list of records that can be used to make a DataFrame
        :return list of records
        """
        if timezone is not None:
            from pyspark.sql.types import _check_series_convert_timestamps_tz_local
            copied = False
            if isinstance(schema, StructType):
                for field in schema:
                    # TODO: handle nested timestamps, such as ArrayType(TimestampType())?
                    if isinstance(field.dataType, TimestampType):
                        s = _check_series_convert_timestamps_tz_local(pdf[field.name], timezone)
                        if s is not pdf[field.name]:
                            if not copied:
                                # Copy once if the series is modified to prevent the original
                                # Pandas DataFrame from being updated
                                pdf = pdf.copy()
                                copied = True
                            pdf[field.name] = s
            else:
                for column, series in pdf.iteritems():
                    s = _check_series_convert_timestamps_tz_local(series, timezone)
                    if s is not series:
                        if not copied:
                            # Copy once if the series is modified to prevent the original
                            # Pandas DataFrame from being updated
                            pdf = pdf.copy()
                            copied = True
                        pdf[column] = s

        # Convert pandas.DataFrame to list of numpy records
        np_records = pdf.to_records(index=False)

        # Check if any columns need to be fixed for Spark to infer properly
        if len(np_records) > 0:
            record_dtype = self._get_numpy_record_dtype(np_records[0])
            if record_dtype is not None:
                return [r.astype(record_dtype).tolist() for r in np_records]

        # Convert list of numpy records to python lists
        return [r.tolist() for r in np_records]

    def _create_from_pandas_with_arrow(self, pdf, schema, timezone):
        """
        Create a DataFrame from a given pandas.DataFrame by slicing it into partitions, converting
        to Arrow data, then sending to the JVM to parallelize. If a schema is passed in, the
        data types will be used to coerce the data in Pandas to Arrow conversion.
        """
        from pyspark.serializers import ArrowStreamSerializer, _create_batch
        from pyspark.sql.types import from_arrow_schema, to_arrow_type, TimestampType
        from pyspark.sql.utils import require_minimum_pandas_version, \
            require_minimum_pyarrow_version

        require_minimum_pandas_version()
        require_minimum_pyarrow_version()

        from pandas.api.types import is_datetime64_dtype, is_datetime64tz_dtype

        # Determine arrow types to coerce data when creating batches
        if isinstance(schema, StructType):
            arrow_types = [to_arrow_type(f.dataType) for f in schema.fields]
        elif isinstance(schema, DataType):
            raise ValueError("Single data type %s is not supported with Arrow" % str(schema))
        else:
            # Any timestamps must be coerced to be compatible with Spark
            arrow_types = [to_arrow_type(TimestampType())
                           if is_datetime64_dtype(t) or is_datetime64tz_dtype(t) else None
                           for t in pdf.dtypes]

        # Slice the DataFrame to be batched
        step = -(-len(pdf) // self.sparkContext.defaultParallelism)  # round int up
        pdf_slices = (pdf[start:start + step] for start in xrange(0, len(pdf), step))

        # Create Arrow record batches
        batches = [_create_batch([(c, t) for (_, c), t in zip(pdf_slice.iteritems(), arrow_types)],
                                 timezone)
                   for pdf_slice in pdf_slices]

        # Create the Spark schema from the first Arrow batch (always at least 1 batch after slicing)
        if isinstance(schema, (list, tuple)):
            struct = from_arrow_schema(batches[0].schema)
            for i, name in enumerate(schema):
                struct.fields[i].name = name
                struct.names[i] = name
            schema = struct

        jsqlContext = self._wrapped._jsqlContext

        def reader_func(temp_filename):
            return self._jvm.PythonSQLUtils.readArrowStreamFromFile(jsqlContext, temp_filename)

        def create_RDD_server():
            return self._jvm.ArrowRDDServer(jsqlContext)

        # Create Spark DataFrame from Arrow stream file, using one batch per partition
        jrdd = self._sc._serialize_to_jvm(batches, ArrowStreamSerializer(), reader_func,
                                          create_RDD_server)
        jdf = self._jvm.PythonSQLUtils.toDataFrame(jrdd, schema.json(), jsqlContext)
        df = DataFrame(jdf, self._wrapped)
        df._schema = schema
        return df

    @staticmethod
    def _create_shell_session():
        """
        Initialize a SparkSession for a pyspark shell session. This is called from shell.py
        to make error handling simpler without needing to declare local variables in that
        script, which would expose those to users.
        """
        import py4j
        from pyspark.conf import SparkConf
        from pyspark.context import SparkContext
        try:
            # Try to access HiveConf, it will raise exception if Hive is not added
            conf = SparkConf()
            if conf.get('spark.sql.catalogImplementation', 'hive').lower() == 'hive':
                SparkContext._jvm.org.apache.hadoop.hive.conf.HiveConf()
                return SparkSession.builder\
                    .enableHiveSupport()\
                    .getOrCreate()
            else:
                return SparkSession.builder.getOrCreate()
        except (py4j.protocol.Py4JError, TypeError):
            if conf.get('spark.sql.catalogImplementation', '').lower() == 'hive':
                warnings.warn("Fall back to non-hive support because failing to access HiveConf, "
                              "please make sure you build spark with hive")

        return SparkSession.builder.getOrCreate()

    @since(2.0)
    @ignore_unicode_prefix
    def createDataFrame(self, data, schema=None, samplingRatio=None, verifySchema=True):
        """
        Creates a :class:`DataFrame` from an :class:`RDD`, a list or a :class:`pandas.DataFrame`.

        When ``schema`` is a list of column names, the type of each column
        will be inferred from ``data``.

        When ``schema`` is ``None``, it will try to infer the schema (column names and types)
        from ``data``, which should be an RDD of :class:`Row`,
        or :class:`namedtuple`, or :class:`dict`.

        When ``schema`` is :class:`pyspark.sql.types.DataType` or a datatype string, it must match
        the real data, or an exception will be thrown at runtime. If the given schema is not
        :class:`pyspark.sql.types.StructType`, it will be wrapped into a
        :class:`pyspark.sql.types.StructType` as its only field, and the field name will be "value",
        each record will also be wrapped into a tuple, which can be converted to row later.

        If schema inference is needed, ``samplingRatio`` is used to determine the ratio of
        rows used for schema inference. The first row will be used if ``samplingRatio`` is ``None``.

        :param data: an RDD of any kind of SQL data representation(e.g. row, tuple, int, boolean,
            etc.), or :class:`list`, or :class:`pandas.DataFrame`.
        :param schema: a :class:`pyspark.sql.types.DataType` or a datatype string or a list of
            column names, default is ``None``. The data type string format equals to
            :class:`pyspark.sql.types.DataType.simpleString`, except that top level struct type can
            omit the ``struct<>`` and atomic types use ``typeName()`` as their format, e.g. use
            ``byte`` instead of ``tinyint`` for :class:`pyspark.sql.types.ByteType`. We can also use
            ``int`` as a short name for ``IntegerType``.
        :param samplingRatio: the sample ratio of rows used for inferring
        :param verifySchema: verify data types of every row against schema.
        :return: :class:`DataFrame`

        .. versionchanged:: 2.1
           Added verifySchema.

        .. note:: Usage with spark.sql.execution.arrow.enabled=True is experimental.

        >>> l = [('Alice', 1)]
        >>> spark.createDataFrame(l).collect()
        [Row(_1=u'Alice', _2=1)]
        >>> spark.createDataFrame(l, ['name', 'age']).collect()
        [Row(name=u'Alice', age=1)]

        >>> d = [{'name': 'Alice', 'age': 1}]
        >>> spark.createDataFrame(d).collect()
        [Row(age=1, name=u'Alice')]

        >>> rdd = sc.parallelize(l)
        >>> spark.createDataFrame(rdd).collect()
        [Row(_1=u'Alice', _2=1)]
        >>> df = spark.createDataFrame(rdd, ['name', 'age'])
        >>> df.collect()
        [Row(name=u'Alice', age=1)]

        >>> from pyspark.sql import Row
        >>> Person = Row('name', 'age')
        >>> person = rdd.map(lambda r: Person(*r))
        >>> df2 = spark.createDataFrame(person)
        >>> df2.collect()
        [Row(name=u'Alice', age=1)]

        >>> from pyspark.sql.types import *
        >>> schema = StructType([
        ...     StructField("name", StringType(), True),
        ...     StructField("age", IntegerType(), True)])
        >>> df3 = spark.createDataFrame(rdd, schema)
        >>> df3.collect()
        [Row(name=u'Alice', age=1)]

        >>> spark.createDataFrame(df.toPandas()).collect()  # doctest: +SKIP
        [Row(name=u'Alice', age=1)]
        >>> spark.createDataFrame(pandas.DataFrame([[1, 2]])).collect()  # doctest: +SKIP
        [Row(0=1, 1=2)]

        >>> spark.createDataFrame(rdd, "a: string, b: int").collect()
        [Row(a=u'Alice', b=1)]
        >>> rdd = rdd.map(lambda row: row[1])
        >>> spark.createDataFrame(rdd, "int").collect()
        [Row(value=1)]
        >>> spark.createDataFrame(rdd, "boolean").collect() # doctest: +IGNORE_EXCEPTION_DETAIL
        Traceback (most recent call last):
            ...
        Py4JJavaError: ...
        """
        if isinstance(data, DataFrame):
            raise TypeError("data is already a DataFrame")

        if isinstance(schema, basestring):
            schema = _parse_datatype_string(schema)
        elif isinstance(schema, (list, tuple)):
            # Must re-encode any unicode strings to be consistent with StructField names
            schema = [x.encode('utf-8') if not isinstance(x, str) else x for x in schema]

        try:
            import pandas
            has_pandas = True
        except Exception:
            has_pandas = False
        if has_pandas and isinstance(data, pandas.DataFrame):
            from pyspark.sql.utils import require_minimum_pandas_version
            require_minimum_pandas_version()

            if self._wrapped._conf.pandasRespectSessionTimeZone():
                timezone = self._wrapped._conf.sessionLocalTimeZone()
            else:
                timezone = None

            # If no schema supplied by user then get the names of columns only
            if schema is None:
                schema = [str(x) if not isinstance(x, basestring) else
                          (x.encode('utf-8') if not isinstance(x, str) else x)
                          for x in data.columns]

            if self._wrapped._conf.arrowEnabled() and len(data) > 0:
                try:
                    return self._create_from_pandas_with_arrow(data, schema, timezone)
                except Exception as e:
                    from pyspark.util import _exception_message

                    if self._wrapped._conf.arrowFallbackEnabled():
                        msg = (
                            "createDataFrame attempted Arrow optimization because "
                            "'spark.sql.execution.arrow.enabled' is set to true; however, "
                            "failed by the reason below:\n  %s\n"
                            "Attempting non-optimization as "
                            "'spark.sql.execution.arrow.fallback.enabled' is set to "
                            "true." % _exception_message(e))
                        warnings.warn(msg)
                    else:
                        msg = (
                            "createDataFrame attempted Arrow optimization because "
                            "'spark.sql.execution.arrow.enabled' is set to true, but has reached "
                            "the error below and will not continue because automatic fallback "
                            "with 'spark.sql.execution.arrow.fallback.enabled' has been set to "
                            "false.\n  %s" % _exception_message(e))
                        warnings.warn(msg)
                        raise
            data = self._convert_from_pandas(data, schema, timezone)

        if isinstance(schema, StructType):
            verify_func = _make_type_verifier(schema) if verifySchema else lambda _: True

            def prepare(obj):
                verify_func(obj)
                return obj
        elif isinstance(schema, DataType):
            dataType = schema
            schema = StructType().add("value", schema)

            verify_func = _make_type_verifier(
                dataType, name="field value") if verifySchema else lambda _: True

            def prepare(obj):
                verify_func(obj)
                return obj,
        else:
            prepare = lambda obj: obj

        if isinstance(data, RDD):
            rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
        else:
            rdd, schema = self._createFromLocal(map(prepare, data), schema)
        jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
        jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
        df = DataFrame(jdf, self._wrapped)
        df._schema = schema
        return df

    @ignore_unicode_prefix
    @since(2.0)
    def sql(self, sqlQuery):
        """Returns a :class:`DataFrame` representing the result of the given query.

        :return: :class:`DataFrame`

        >>> df.createOrReplaceTempView("table1")
        >>> df2 = spark.sql("SELECT field1 AS f1, field2 as f2 from table1")
        >>> df2.collect()
        [Row(f1=1, f2=u'row1'), Row(f1=2, f2=u'row2'), Row(f1=3, f2=u'row3')]
        """
        return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)

    @since(2.0)
    def table(self, tableName):
        """Returns the specified table as a :class:`DataFrame`.

        :return: :class:`DataFrame`

        >>> df.createOrReplaceTempView("table1")
        >>> df2 = spark.table("table1")
        >>> sorted(df.collect()) == sorted(df2.collect())
        True
        """
        return DataFrame(self._jsparkSession.table(tableName), self._wrapped)

    @property
    @since(2.0)
    def read(self):
        """
        Returns a :class:`DataFrameReader` that can be used to read data
        in as a :class:`DataFrame`.

        :return: :class:`DataFrameReader`
        """
        return DataFrameReader(self._wrapped)

    @property
    @since(2.0)
    def readStream(self):
        """
        Returns a :class:`DataStreamReader` that can be used to read data streams
        as a streaming :class:`DataFrame`.

        .. note:: Evolving.

        :return: :class:`DataStreamReader`
        """
        return DataStreamReader(self._wrapped)

    @property
    @since(2.0)
    def streams(self):
        """Returns a :class:`StreamingQueryManager` that allows managing all the
        :class:`StreamingQuery` StreamingQueries active on `this` context.

        .. note:: Evolving.

        :return: :class:`StreamingQueryManager`
        """
        from pyspark.sql.streaming import StreamingQueryManager
        return StreamingQueryManager(self._jsparkSession.streams())

    @since(2.0)
    def stop(self):
        """Stop the underlying :class:`SparkContext`.
        """
        self._sc.stop()
        # We should clean the default session up. See SPARK-23228.
        self._jvm.SparkSession.clearDefaultSession()
        SparkSession._instantiatedSession = None

    @since(2.0)
    def __enter__(self):
        """
        Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.
        """
        return self

    @since(2.0)
    def __exit__(self, exc_type, exc_val, exc_tb):
        """
        Enable 'with SparkSession.builder.(...).getOrCreate() as session: app' syntax.

        Specifically stop the SparkSession on exit of the with block.
        """
        self.stop()


def _test():
    import os
    import doctest
    from pyspark.context import SparkContext
    from pyspark.sql import Row
    import pyspark.sql.session

    os.chdir(os.environ["SPARK_HOME"])

    globs = pyspark.sql.session.__dict__.copy()
    sc = SparkContext('local[4]', 'PythonTest')
    globs['sc'] = sc
    globs['spark'] = SparkSession(sc)
    globs['rdd'] = rdd = sc.parallelize(
        [Row(field1=1, field2="row1"),
         Row(field1=2, field2="row2"),
         Row(field1=3, field2="row3")])
    globs['df'] = rdd.toDF()
    (failure_count, test_count) = doctest.testmod(
        pyspark.sql.session, globs=globs,
        optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE)
    globs['sc'].stop()
    if failure_count:
        sys.exit(-1)


if __name__ == "__main__":
    _test()
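`_create_from_pandas_with_arrow` above slices the pandas DataFrame with `step = -(-len(pdf) // defaultParallelism)`, a ceiling division written with floor division alone. A minimal pure-Python sketch of that slicing logic (the helper names `ceil_div` and `slice_bounds` are illustrative, not part of pyspark):

```python
def ceil_div(numerator, denominator):
    # -(-n // d) rounds up using only integer floor division, no math.ceil
    return -(-numerator // denominator)


def slice_bounds(total_rows, parallelism):
    """Split total_rows into at most `parallelism` contiguous slices."""
    step = ceil_div(total_rows, parallelism)
    return [(start, min(start + step, total_rows))
            for start in range(0, total_rows, step)]
```

With 10 rows and parallelism 4 this gives a step of 3 and four slices, the last one shorter, mirroring how each slice becomes one Arrow record batch and hence one partition.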
@ -1,5 +0,0 @@
#!/usr/bin/env bash

cd /opt/spark/data/tispark-sample-data

mysql -h tidb -P 4000 -u root < dss.ddl
@ -1,9 +0,0 @@
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("spark://tispark-master:7077").appName("TiSpark tests").getOrCreate()

spark.sql("use TPCH_001")

count = spark.sql("select count(*) as c from lineitem").first()['c']

assert 60175 == count
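The smoke test above pins a single hard-coded row count. A hedged sketch of making such checks table-driven (only the `lineitem` figure comes from the script above; `run_count` stands in for a `spark.sql("select count(*) ...")` call):

```python
# Expected row counts for the TPCH_001 sample data; lineitem matches the assert above.
EXPECTED_COUNTS = {"lineitem": 60175}


def mismatched_tables(run_count, expected=EXPECTED_COUNTS):
    """run_count(table) -> int; return the tables whose count does not match."""
    return [table for table, n in expected.items() if run_count(table) != n]
```

Reporting all mismatching tables at once beats a bare `assert`, which stops at the first failure.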
@ -1,190 +0,0 @@
#!/bin/bash

# Copyright 2018 PingCAP, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# See the License for the specific language governing permissions and
# limitations under the License.

#* --------------------------------------------------------------------- */
#* log configure                                                          */
#* --------------------------------------------------------------------- */
### Define logging color
COLOR_ORIGIN="\033[0m"
COLOR_GREEN="\033[32m"
COLOR_YELLOW="\033[33m"
COLOR_RED="\033[31m"

### Define logging level
LOGGER_LEVEL="3"

### Define common logger
function logger() {
    cur_level=$1
    cur_type=$2
    cur_color=$3
    shift && shift && shift
    cur_msg=$*

    [[ ${LOGGER_LEVEL} -lt ${cur_level} ]] && return 0

    pre_fix="${cur_color}[${cur_type}][$(date +%F)][$(date +%T)]"
    pos_fix="${COLOR_ORIGIN}"
    echo -e "${pre_fix} ${cur_msg} ${pos_fix}"
}

### Define notice logger
function notice() {
    logger 3 "NOTICE" ${COLOR_GREEN} $*
}

### Define warning logger
function warning() {
    logger 2 "WARNING" ${COLOR_YELLOW} $*
}

### Define fatal logger
function fatal() {
    logger 1 "FATAL" ${COLOR_RED} $*
    exit 1
}
##########################################################################

function print_help() {
    echo "\
${1:-Debug tool for container.}

Usage:
    container_debug [OPTIONS] [ARG]

Options:
    -i  The container's identity, possible values are 'containerID' or 'containerName'
    -s  The service name defined in docker-compose
    -w  Run pprof via a web interface for go program
    -p  The binary path of the debugged process in its own container
    -h  Print help information

When you enter the debug container, you can find the pid of the debugged process through the ps command,
then you can find the binary of the debugged process through this path /proc/\${pid}/root/\${binary_path}.

\${binary_path} represents the binary path of the debugged process in its own container.
\${pid} represents the process id of the debugged process as seen in the debug container.
" >&2
    exit
}

###############################variable define##################################
WORKSPACE=$(cd $(dirname $0)/..; pwd)
DEBUG_IMAGE=${DEBUG_IMAGE:-uhub.service.ucloud.cn/pingcap/tidb-debug:latest}
SUFFIX=$(uuidgen|cut -d'-' -f1|tr '[A-Z]' '[a-z]')
DEBUG_CONTAINER_NAME=debug-${SUFFIX}
TMP_FILE=$(mktemp /tmp/binary.XXXXXX)
################################################################################

if [[ $# -eq 0 ]]
then
    print_help
fi

function cleanup() {
    notice "start to clean tmp file ${TMP_FILE}"
    [[ -f ${TMP_FILE} ]] && rm -f ${TMP_FILE}
}

### register signal processing function
trap cleanup EXIT

### change workspace
cd $WORKSPACE

optstring=":i:s:p:wh"

while getopts "$optstring" opt; do
    case $opt in
        i)
            container_id=${OPTARG}
            ;;
        s)
            service_name=${OPTARG}
            ;;
        p)
            binary_path=${OPTARG}
            ;;
        w)
            web=true
            ;;
        h)
            print_help
            ;;
        \?)
            fatal "Invalid option: -$OPTARG" >&2
            ;;
        :)
            fatal "Option -$OPTARG requires an argument" >&2
            ;;
    esac
done

if [[ -z ${service_name} && -z ${container_id} ]]
then
    fatal "please use -s or -i options to select the target container" >&2
elif [[ ! -z ${container_id} ]]
then
    ### If both -s and -i options are specified, the -i option is preferred
    cid=${container_id}
else
    cprefix=$(basename $(pwd)|tr -Cd '[A-Za-z0-9]'|tr '[A-Z]' '[a-z]')
    cid="${cprefix}_${service_name}_1"
    docker ps | grep ${cid} >/dev/null
    [[ $? -ne 0 ]] && fatal "not found docker-compose service ${service_name}, please confirm the correct docker-compose service name" >&2
fi

if [[ ! -z ${binary_path} ]]
then
    binary_name=$(basename ${binary_path})
    docker cp ${cid}:${binary_path} ${TMP_FILE}
    if [[ $? -ne 0 ]]
    then
        ### not found binary in container, reset variable ${binary_name}
        binary_name=
        warning "not found ${binary_path} in container ${cid}, please specify the correct binary path in container" >&2
    fi
fi

if [[ ! -z ${web} ]]
then
    ### starts a web server for graphic visualizations of golang program profiles

    ### generate a random web port
    ### TODO: Test whether this port has been used
    wport=${RANDOM}
    [[ ${wport} -lt 10000 ]] && wport=$((wport+10000))

    ### get the container exposed port
    cport=$(docker port ${cid}|grep -E '[0-9]{5}'|awk -F: '{print $NF}')
    notice "starts a web server on localhost:${wport}"
    pprof -http=:${wport} ${TMP_FILE} http://localhost:${cport}/debug/pprof/profile
else
    ### enter debug container to debug the specified container
    docker_run_args=(-ti --rm --name=${DEBUG_CONTAINER_NAME})
    docker_run_args+=(--pid=container:${cid})
    docker_run_args+=(--network=container:${cid})
    docker_run_args+=(--ipc=container:${cid})
    docker_run_args+=(--cap-add=SYS_PTRACE)
    docker_run_args+=(--privileged=true)
    if [[ ! -z ${binary_name} && -e ${TMP_FILE} ]]
    then
        docker_run_args+=(-v ${TMP_FILE}:/${binary_name})
    else
        notice "you can access the debugged container ${cid} file system through this path /proc/\${DEBUGGED_PROCESS_PID}/root"
    fi
    docker_run_args+=($DEBUG_IMAGE)
    docker run ${docker_run_args[@]}
fi
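The script above picks `wport` at random and leaves a TODO about checking whether the port is already in use. One way to close that gap is a bind probe, sketched here in Python (the shell script itself would need an equivalent, e.g. via `nc` or `ss`; the helper name is illustrative):

```python
import socket


def port_is_free(port, host="127.0.0.1"):
    # A TCP port is treated as free if we can bind it. Binding port 0
    # asks the OS for any free ephemeral port and always succeeds.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```

A bind probe is inherently racy (another process can grab the port between the check and `pprof -http` starting), so retrying on startup failure is still advisable.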
@ -1 +1 @@
Subproject commit bc3945da740d9590cffc68e0a821e6d622e1c7bd
Subproject commit 6ee8d23ac480e37e9d1802cd7bab6f3884c0734d