From e8411a554830b4c5f59c98025ccd748ddfee1870 Mon Sep 17 00:00:00 2001
From: Ranjith R
Date: Wed, 30 Sep 2020 11:03:29 +0530
Subject: [PATCH] Update README.md

- Update Kudo based Cassandra on OpenEBS instruction

Signed-off-by: Ranjith R
---
 k8s/demo/cassandra/README.md | 528 ++++++++++-------------------------
 1 file changed, 150 insertions(+), 378 deletions(-)

diff --git a/k8s/demo/cassandra/README.md b/k8s/demo/cassandra/README.md
index c10d5eaf86..ef2e3ffce0 100644
--- a/k8s/demo/cassandra/README.md
+++ b/k8s/demo/cassandra/README.md
@@ -1,419 +1,191 @@
 # Running Cassandra with OpenEBS

-This tutorial provides detailed instructions to run a Cassandra Statefulset with OpenEBS storage and perform
-some simple database operations to verify successful deployment.
+This tutorial provides detailed instructions to run a Kudo operator based Cassandra Statefulset with OpenEBS storage, perform
+some simple database operations to verify a successful deployment, and run a simple performance benchmark.

-## Cassandra

+## Introduction

-Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle
-large amounts of data across nodes, providing high availability with no single point of failure. It uses
-asynchronous masterless replication allowing low latency operations for all clients.

+Apache Cassandra is a free and open-source distributed NoSQL database management system designed to handle large amounts of data across nodes, providing high availability with no single point of failure. It uses asynchronous masterless replication, allowing low latency operations for all clients.

-## Prerequisite

+OpenEBS is the most popular open source Container Attached Storage solution for Kubernetes and is favored by many organizations for its simplicity, its ease of management, and its highly flexible deployment options that meet the storage needs of any given stateful application.

-A fully configured (preferably, multi-node) Kubernetes cluster configured with the OpenEBS operator and OpenEBS
-storage classes.

+Depending on the performance and high availability requirements of Cassandra, you can choose one of the following deployment options:

+- For optimal performance, deploy Cassandra with OpenEBS Local PV.
+- If you would like to use storage layer capabilities such as high availability, snapshots, and incremental backups, select OpenEBS cStor.

-```
-test@Master:~$ kubectl get pods
-NAME READY STATUS RESTARTS AGE
-maya-apiserver-3416621614-g6tmq 1/1 Running 1 8d
-openebs-provisioner-4230626287-503dv 1/1 Running 1 8d
-```
-
-## Deploy the Cassandra Statefulset with OpenEBS storage
-
-The statefulset specification YAMLs are available at OpenEBS/k8s/demo/cassandra.
-
-The number of replicas in the Statefulset can be modified as required. This example uses 2 replicas.
- -``` -apiVersion: apps/v1beta1 -kind: StatefulSet -metadata: - name: cassandra - labels: - app: cassandra -spec: - serviceName: cassandra - replicas: 2 - selector: - matchLabels: - app: cassandra - template: - metadata: - labels: - app: cassandra -: -``` - -Execute the following commands: - -``` -test@Master:~$ cd openebs/k8s/demo/cassandra - -test@Master:~/openebs/k8s/demo/cassandra$ ls -ltr -total 8 --rw-rw-r-- 1 test test 165 Oct 30 12:19 cassandra-service.yaml --rw-rw-r-- 1 test test 2382 Nov 11 14:09 cassandra-statefulset.yaml -``` +Whether you use OpenEBS Local PV or cStor, you can set up the Kubernetes cluster with all its nodes in a single availability zone/data center or spread across multiple zones/ data centers. -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl apply -f cassandra-service.yaml -service "cassandra" configured -``` -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl apply -f cassandra-statefulset.yaml -statefulset "cassandra" created -``` +## Configuration workflow -Verify that all the OpenEBS persistent volumes are created, the Cassandra headless service and replicas -are running: +1. Install OpenEBS +2. Select OpenEBS storage engine +3. Configure OpenEBS LocalPV StorageClass +4. Install Kudo operator to install Cassandra +5. Install Kudo based Cassandra +6. Verify Cassandra is up and running +7. Testing Cassandra Performance on OpenEBS -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl get pods -NAME READY STATUS RESTARTS AGE -cassandra-0 1/1 Running 0 4h -cassandra-1 1/1 Running 0 4h -maya-apiserver-3416621614-g6tmq 1/1 Running 1 8d -openebs-provisioner-4230626287-503dv 1/1 Running 1 8d -pvc-1c16536c-c6bc-11e7-a0eb-000c298ff5fc-ctrl-599202565-2kdff 1/1 Running 0 4h -pvc-1c16536c-c6bc-11e7-a0eb-000c298ff5fc-rep-3068892500-22ccd 1/1 Running 0 4h -pvc-1c16536c-c6bc-11e7-a0eb-000c298ff5fc-rep-3068892500-lhwdw 1/1 Running 0 4h -pvc-e7d18817-c6bb-11e7-a0eb-000c298ff5fc-ctrl-1103031005-8vv82 1/1 Running 0 4h -pvc-e7d18817-c6bb-11e7-a0eb-000c298ff5fc-rep-3006965094-cntx5 1/1 Running 0 4h -pvc-e7d18817-c6bb-11e7-a0eb-000c298ff5fc-rep-3006965094-mhsjt 1/1 Running 0 4h -``` +### Install OpenEBS -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl get svc -NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE -cassandra None 9042/TCP 5h -kubernetes 10.96.0.1 443/TCP 14d -maya-apiserver-service 10.102.92.217 5656/TCP 14d -pvc-1c16536c-c6bc-11e7-a0eb-000c298ff5fc-ctrl-svc 10.107.177.156 3260/TCP,9501/TCP 4h -pvc-e7d18817-c6bb-11e7-a0eb-000c298ff5fc-ctrl-svc 10.108.47.234 3260/TCP,9501/TCP 4h -``` - -Note: It may take some time for the pods to start as the images must be pulled and instantiated. This is also -dependent on the network speed. - -## Verify successful Cassandra Deployment +If OpenEBS is not installed in your K8s cluster, this can be done from [here](https://docs.openebs.io/docs/next/overview.html). If OpenEBS is already installed, go to the next step. -The verification procedure can be carried out in a series of steps, starting from listing the functional -replicas to by creating and deleting test data in the Cassandra database. +### Select OpenEBS storage engine -### Step-1: Install the Cqlsh Utility +A storage engine is the data plane component of the IO path of a persistent volume. In CAS architecture, users can choose different data planes for different application workloads based on a configuration policy. 
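If OpenEBS is already installed, you can list the StorageClasses it created to see which engines are available on your cluster. This is only a quick check; the exact set of classes depends on your OpenEBS version and configuration, and the names below are the defaults referenced later in this guide.

```
# List StorageClasses; a default OpenEBS install includes classes such as
# openebs-hostpath (Local PV on a host directory) and openebs-device
# (Local PV on a dedicated block device).
kubectl get sc
```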
OpenEBS provides several types of storage engines. Choose the engine that best suits your application requirements and the storage available on your Kubernetes nodes. More information can be found [here](https://docs.openebs.io/docs/next/overview.html#openebs-storage-engines).

-Cqlsh is a Python based utility that enables you to execute Cassandra Query Language (CQL). CQL is a
-declarative language that enables users to query Cassandra using semantics similar to SQL.

+### Configure OpenEBS LocalPV StorageClass

-Install the python-minimal and python-pip apt packages (if not available) and perform a pip install of
-Csqlsh.
-
-```
-sudo apt-get install -y python-minimal python-pip
-pip install cqlsh
-```
-
-Note: Installing Csqlsh may take a few minutes (typically, the cassandra-driver package takes time to download
-and setup).
-
-### Step-2: Verify Replica Status on Cassandra
-
-```
-test@Master:~$ kubectl exec cassandra-0 -- nodetool status
-Datacenter: DC1-K8Demo
-======================
-Status=Up/Down
-|/ State=Normal/Leaving/Joining/Moving
--- Address Load Tokens Owns (effective) Host ID Rack
-UN 10.36.0.6 103.83 KiB 32 100.0% e013c19d-9c6f-49cd-838e-c69eb310f88e Rack1-K8Demo
-UN 10.44.0.3 83.1 KiB 32 100.0% 1d2e3b79-4b0b-4bf9-b435-fcfa8be8a603 Rack1-K8Demo
-```
-
-A status of "UN" implies Up and Normal. The "Owns" column suggests the data distribution percentage for the
-content placed into the Cassandra keyspaces. In the current example, we have chosen a replica count of 2 due to
-which the data is evenly distributed and copies maintained.
-
-### Step-3: Create a Test Keyspace with Tables
+- `openebs-hostpath` - This option creates Kubernetes Persistent Volumes that store data in an OS host path directory at: /var/openebs//. Select this option if you don't have any additional block devices attached to the Kubernetes nodes. If you would like to customize the directory where data is saved, create a new OpenEBS LocalPV storage class using these instructions.
+
+- `openebs-device` - This option creates Kubernetes Local PVs using the block devices attached to the node. Select this option when you want to dedicate a complete block device on a node to a Cassandra node. You can customize which devices are discovered and managed by OpenEBS using the instructions here.

-- Identify the IP address of any of the Cassandra replicas, for example, Cassandra-0. This is available from the
-output of the nodetool status command executed in the previous step.
+### Install Kudo operator to install Cassandra

-- Login to the CQL shell using the Cqlsh utility.
+- Set up the environment for installing the Kudo operator using the following steps.

 ```
-    test@Master:~$ cqlsh 10.44.0.3 9042 --cqlversion="3.4.2"
-    Connected to K8Demo at 10.44.0.3:9042.
-    [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4]
-    Use HELP for help.
-
-    cqlsh>
+    export GOROOT=/usr/local/go
+    export GOPATH=$HOME/gopath
+    export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
 ```
-
-- Create a keyspace with replication factor 2.
-
+- Choose the Kudo version. The latest version can be found [here](https://github.com/kudobuilder/kudo/releases). In the following commands, the selected Kudo version is v0.14.0.
 ```
 VERSION=0.14.0
 OS=$(uname | tr '[:upper:]' '[:lower:]')
 ARCH=$(uname -m)
 wget -O kubectl-kudo https://github.com/kudobuilder/kudo/releases/download/v${VERSION}/kubectl-kudo_${VERSION}_${OS}_${ARCH}
 ```
- Change the permission of the downloaded plugin and move it into your PATH.
 ```
 chmod +x kubectl-kudo
 sudo mv kubectl-kudo /usr/local/bin/kubectl-kudo
 ```
- Install Cert-manager.
  Before installing the KUDO operator, cert-manager must already be installed in your cluster. If it is not, install it by following the instructions [here](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests). Since our K8s version is v1.16.0, we installed cert-manager using the following command.
 ```
 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
 ```
- Install the Kudo operator using the selected version. In the following command, the selected version is v0.14.0.
 ```
 kubectl-kudo init --version 0.14.0
 ```
  Verify the Kudo controller pod status:
 ```
 kubectl get pod -n kudo-system
 NAME                        READY   STATUS    RESTARTS   AGE
 kudo-controller-manager-0   1/1     Running   0          2m40s
 ```

### Install Kudo based Cassandra

Install Kudo based Cassandra using an OpenEBS storage engine. In this example, the storage class used is `openebs-device`. Before deploying Cassandra, make sure there are enough unclaimed block devices available for the Cassandra pods to consume, by running `kubectl get bd -n openebs` as shown below.
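The following check is a minimal sketch; the grep filter and column layout depend on the OpenEBS NDM version in use, so treat the output format as illustrative.

```
# List the block devices discovered by OpenEBS NDM in the openebs namespace.
kubectl get bd -n openebs
# Only devices in the Unclaimed state can be consumed by new volumes, so make
# sure at least one unclaimed device exists per Cassandra replica.
kubectl get bd -n openebs | grep -i unclaimed
```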
+ ``` -test@Master:~$ kubectl get pods -NAME READY STATUS RESTARTS AGE -cassandra-0 1/1 Running 1 1d -maya-apiserver-3416621614-8q6k9 1/1 Running 1 1d -openebs-provisioner-4230626287-p8g1n 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-ctrl-1165089859-rpd6p 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-cqzw4 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-p1f2b 1/1 Running 1 1d - -test@Master:~$ kubectl get statefulset -NAME DESIRED CURRENT AGE -cassandra 1 1 1d - -test@Master:~$ kubectl scale statefulset cassandra --replicas=2 -statefulset "cassandra" scaled - -test@Master:~$ kubectl get pods -NAME READY STATUS RESTARTS AGE -cassandra-0 1/1 Running 1 1d -cassandra-1 0/1 ContainerCreating 0 4s -maya-apiserver-3416621614-8q6k9 1/1 Running 1 1d -openebs-provisioner-4230626287-p8g1n 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-ctrl-1165089859-rpd6p 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-cqzw4 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-p1f2b 1/1 Running 1 1d -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-ctrl-2160660239-l9bkk 1/1 Running 0 4s -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-rep-3359561965-6bcr1 1/1 Running 0 4s -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-rep-3359561965-b2ctt 1/1 Running 0 4s -``` - -Verify that a new OpeneBS persistent volume (PV), i.e., ctrl/replica pods are automatically created upon scaling the -application replicas. - -``` -test@Master:~$ kubectl get pvc -NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE -cassandra-data-cassandra-0 Bound pvc-8910e033-e56b-11e7-8f29-000c298ff5fc 5G RWO openebs-cassandra 1d -cassandra-data-cassandra-1 Bound pvc-f84a8133-e647-11e7-bc35-000c298ff5fc 5G RWO openebs-cassandra 3m +export instance_name=cassandra-openebs +export namespace_name=cassandra +kubectl create ns cassandra +kubectl kudo install cassandra --namespace=$namespace_name --instance $instance_name -p NODE_STORAGE_CLASS=openebs-device ``` - -## Testing Cassandra Performance on OpenEBS - -Performance tests on OpenEBS can be run using the Cassandra-loadgen Kubernetes job (cassandra-loadgen.yaml). Follow the steps -shown below. - -- In the loadgen job specification yaml, replace the workload details (for details on supported workloads, refer https://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsCStress_t.html) -``` ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: cassandra-loadgen -spec: - template: - metadata: - name: cassandra-loadgen - spec: - restartPolicy: Never - containers: - - name: cassandra-loadgen - image: cassandra - command: ["/bin/bash"] - args: ["-c", "cassandra-stress write duration=5m no-warmup -node cassandra-0.cassandra"] - tty: true +### Verify Cassandra is up and running + +- Get the Cassandra Pods,StatefulSet,Service and PVC details ``` + kubectl get pod,service,sts,pvc -n cassandra Should show that Statefulset is deployed with 3 Cassandra pods in running state and a headless service is configured. + NAME READY STATUS RESTARTS AGE + cassandra-openebs-node-0 2/2 Running 0 4m + cassandra-openebs-node-1 2/2 Running 0 3m2s + cassandra-openebs-node-2 2/2 Running 0 3m24s -- Run the Cassandra loadgen Kubernetes job using *kubectl apply* command. 
+ NAME READY AGE + statefulset.apps/cassandra 3/3 6m35s -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl apply -f cassandra-loadgen.yaml -job "cassandra-loadgen" created + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + service/cassandra ClusterIP None 7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP 6m35s -test@Master:~/openebs/k8s/demo/cassandra$ kubectl get pods -NAME READY STATUS RESTARTS AGE -cassandra-0 1/1 Running 1 1d -cassandra-1 1/1 Running 0 23m -cassandra-loadgen-mhwnt 0/1 ContainerCreating 0 5s -maya-apiserver-3416621614-8q6k9 1/1 Running 1 1d -openebs-provisioner-4230626287-p8g1n 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-ctrl-1165089859-rpd6p 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-cqzw4 1/1 Running 1 1d -pvc-8910e033-e56b-11e7-8f29-000c298ff5fc-rep-3111921848-p1f2b 1/1 Running 1 1d -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-ctrl-2160660239-l9bkk 1/1 Running 0 23m -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-rep-3359561965-6bcr1 1/1 Running 0 23m -pvc-f84a8133-e647-11e7-bc35-000c298ff5fc-rep-3359561965-b2ctt 1/1 Running 0 23m -``` -- Verify that the stress tool has started running I/O using *kubectl logs* command. - -``` -test@Master:~/openebs/k8s/demo/cassandra$ kubectl logs -f cassandra-loadgen-mhwnt -******************** Stress Settings ******************** -Command: - Type: write - Count: -1 - Duration: 5 MINUTES - No Warmup: true - Consistency Level: LOCAL_ONE - Target Uncertainty: not applicable - Key Size (bytes): 10 - Counter Increment Distribution: add=fixed(1) -Rate: - Auto: true - Min Threads: 4 - Max Threads: 1000 -Population: - Sequence: 1..1000000 - Order: ARBITRARY - Wrap: true -Insert: - Revisits: Uniform: min=1,max=1000000 - Visits: Fixed: key=1 - Row Population Ratio: Ratio: divisor=1.000000;delegate=Fixed: key=1 - Batch Type: not batching -Columns: - Max Columns Per Key: 5 - Column Names: [C0, C1, C2, C3, C4] - Comparator: AsciiType - Timestamp: null - Variable Column Count: false - Slice: false - Size Distribution: Fixed: key=34 - Count Distribution: Fixed: key=5 -Errors: - Ignore: false - Tries: 10 -Log: - No Summary: false - No Settings: false - File: null - Interval Millis: 1000 - Level: NORMAL -Mode: - API: JAVA_DRIVER_NATIVE - Connection Style: CQL_PREPARED - CQL Version: CQL3 - Protocol Version: V4 - Username: null - Password: null - Auth Provide Class: null - Max Pending Per Connection: 128 - Connections Per Host: 8 - Compression: NONE -Node: - Nodes: [10.47.0.4] - Is White List: false - Datacenter: null -Schema: - Keyspace: keyspace1 - Replication Strategy: org.apache.cassandra.locator.SimpleStrategy - Replication Strategy Options: {replication_factor=1} - Table Compression: null - Table Compaction Strategy: null - Table Compaction Strategy Options: {} -Transport: - factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=null; truststore-password=null; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA; -Port: - Native Port: 9042 - Thrift Port: 9160 - JMX Port: 7199 -Send To Daemon: - *not set* -Graph: - File: null - Revision: unknown - Title: null - Operation: WRITE -TokenRange: - Wrap: false - Split Factor: 1 + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + var-lib-cassandra-cassandra-openebs-node-0 Bound pvc-213f2cfb-231f-4f14-be93-69c3d1c6d5d7 20Gi RWO openebs-device 20m + var-lib-cassandra-cassandra-openebs-node-1 Bound pvc-059bf24b-3546-43f3-aa01-3a6bea640ffd 
20Gi RWO openebs-device 19m
   var-lib-cassandra-cassandra-openebs-node-2 Bound pvc-82367756-7a19-4f7f-9e35-65e7696f3b86 20Gi RWO openebs-device 18m
   ```
- Log in to one of the Cassandra pods and verify the Cassandra cluster status using the following command.
   ```
   kubectl exec -it cassandra-openebs-node-0 -n cassandra -- nodetool status
   Datacenter: datacenter1
   =======================
   Status=Up/Down
   |/ State=Normal/Leaving/Joining/Moving
   --  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
   UN  192.168.30.24  94.21 KiB  256     63.0%             73c54856-f045-48db-b0db-e6a751d005f8  rack1
   UN  192.168.93.31  75.12 KiB  256     65.3%             d48c61b7-551b-4805-b8cc-b915d039f298  rack1
   UN  192.168.56.80  75 KiB     256     71.7%             91fc4107-e447-4605-8cbf-3916f9fd8abf  rack1
   ```
- Create a Test Keyspace with Tables. Log in to one of the Cassandra pods and open the CQL shell with the following command.
   ```
   $cqlsh ..svc.cluster.local
   ```
   Example command:
   ```
   $cqlsh cassandra-openebs-svc.cassandra.svc.cluster.local
   Warning: Cannot create directory at `/home/cassandra/.cassandra`. Command history will not be saved.

   Connected to cassandra-openebs at cassandra-openebs-svc.cassandra.svc.cluster.local:9042.
   [cqlsh 5.0.1 | Cassandra 3.11.5 | CQL spec 3.4.4 | Native protocol v4]
   Use HELP for help.
   cqlsh>
   ```
- Creating a Keyspace. Now, let's create a keyspace and add a table with some entries to it.
   ```
   $cassandra@cqlsh> create keyspace dev
   ... with replication = {'class':'SimpleStrategy','replication_factor':1};
   ```
- Creating Data Objects
   ```
   $cassandra@cqlsh:dev> use dev;
   cassandra@cqlsh:dev> create table emp (empid int primary key,
   ... emp_first varchar, emp_last varchar, emp_dept varchar);
   ```
- Inserting and Querying Data
   ```
   $cassandra@cqlsh:dev> insert into emp (empid, emp_first, emp_last, emp_dept)
   ... values (1,'fred','smith','eng');
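   -- Optional (illustrative): Cassandra reads are most efficient when they filter
   -- on the partition key, so the new row can also be verified with a key-based lookup.
   $cassandra@cqlsh:dev> select * from emp where empid = 1;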
-Running with 4 threadCount
-Running WRITE with 4 threads 5 minutes
-Failed to connect over JMX; not collecting these stats
-type total ops, op/s, pk/s, row/s, mean, med, .95, .99, .999, max, time, stderr, errors, gc: #, max ms, sum ms, sdv ms, mb
-total, 30, 30, 30, 30, 73.3, 33.3, 266.3, 273.2, 273.2, 273.2, 1.0, 0.00000, 0, 0, 0, 0, 0, 0
-total, 164, 134, 134, 134, 31.1, 14.7, 91.9, 177.6, 197.7, 197.7, 2.0, 0.42686, 0, 0, 0, 0, 0, 0
-total, 379, 215, 215, 215, 18.3, 10.9, 55.1, 64.5, 72.9, 72.9, 3.0, 0.31137, 0, 0, 0, 0, 0, 0
-total, 558, 179, 179, 179, 22.4, 11.6, 67.3, 95.9, 104.0, 104.0, 4.0, 0.22588, 0, 0, 0, 0, 0, 0
-total, 762, 204, 204, 204, 19.1, 7.8, 73.1, 112.1, 114.0, 114.0, 5.0, 0.18113, 0, 0, 0, 0, 0, 0
-total, 835, 73, 73, 73, 54.5, 68.4, 113.1, 126.7, 133.6, 133.6, 6.0, 0.18614, 0, 0, 0, 0, 0, 0
-total, 907, 72, 72, 72, 55.3, 10.6, 115.5, 194.1, 200.5, 200.5, 7.0, 0.18075, 0, 0, 0, 0, 0, 0
-total, 996, 89, 89, 89, 44.8, 11.3, 101.8, 108.8, 109.3, 109.3, 8.0, 0.16982, 0, 0, 0, 0, 0, 0
-total, 1066, 70, 70, 70, 57.3, 89.3, 109.6, 114.2, 115.3, 115.3, 9.0, 0.16630, 0, 0, 0, 0, 0, 0
-total, 1130, 64, 64, 64, 62.0, 88.8, 110.6, 111.4, 111.9, 111.9, 10.0, 0.16387, 0, 0, 0, 0, 0, 0
-total, 1195, 65, 65, 65, 63.3, 91.2, 120.9, 132.8, 133.6, 133.6, 11.0, 0.15948, 0, 0, 0, 0, 0, 0
-total, 1273, 78, 78, 78, 49.8, 72.4, 103.7, 115.9, 116.1, 116.1, 12.0, 0.15172, 0, 0, 0, 0, 0, 0
-total, 1354, 81, 81, 81, 49.6, 8.3, 101.8, 102.6, 102.9, 102.9, 13.0, 0.14419, 0, 0, 0, 0, 0, 0
-total, 1426, 72, 72, 72, 55.2, 88.1, 103.6, 109.2, 110.1, 110.1, 14.0, 0.13889, 0, 0, 0, 0, 0, 0
-```

   $cassandra@cqlsh:dev> select * from emp;

   empid | emp_dept | emp_first | emp_last
   -------+----------+-----------+----------
   1 | eng | fred | smith

   (1 rows)
   ```
- Updating Data
   ```
   $cassandra@cqlsh:dev> update emp set emp_dept = 'fin' where empid = 1;
   $cassandra@cqlsh:dev> select * from emp;

   empid | emp_dept | emp_first | emp_last
   -------+----------+-----------+----------
   1 | fin | fred | smith

   (1 rows)
   ```

### Testing Cassandra Performance on OpenEBS

- Log in to one of the Cassandra pods to run the following sample loadgen commands, which write some entries to the database and read them back.
   ```
   $kubectl exec -it cassandra-openebs-node-0 bash -n cassandra
   ```
- Get the database health status
   ```
   $nodetool status
   Datacenter: datacenter1
   =======================
   Status=Up/Down
   |/ State=Normal/Leaving/Joining/Moving
   --  Address        Load        Tokens  Owns (effective)  Host ID                               Rack
   UN  192.168.52.94  135.39 MiB  256     32.6%             68206664-b1e7-4e73-9677-14119536e42d  rack1
   UN  192.168.7.79   189.98 MiB  256     36.3%             5f6176f5-c47f-4d12-bd16-c9427baf68a0  rack1
   UN  192.168.70.87  127.46 MiB  256     31.2%             da31ba66-42dd-4c85-a212-a0cb828bbefb  rack1
   ```
- Run Write load
   ```
   cassandra@cassandra-openebs-node-0:/opt/cassandra/tools/bin$ ./cassandra-stress write n=1000000 -rate threads=50 -node 192.168.52.94
   ```
- Run Read load
   ```
   cassandra@cassandra-openebs-node-0:/opt/cassandra/tools/bin$ ./cassandra-stress read n=200000 -rate threads=50 -node 192.168.52.94
   ```
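- Run a mixed load (optional). A combined read/write run can give a more realistic picture of how Cassandra behaves on the selected OpenEBS storage engine. The sketch below assumes the same pod and node IP used above; the ratio, duration, and log file name are illustrative and can be adjusted to match your workload.
   ```
   # 1 write for every 3 reads, for 5 minutes, logging interval results for later comparison
   cassandra@cassandra-openebs-node-0:/opt/cassandra/tools/bin$ ./cassandra-stress mixed ratio\(write=1,read=3\) duration=5m -rate threads=50 -node 192.168.52.94 -log file=mixed-stress.log
   ```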