Update links of blogs and docs within blogs #257

Merged · 9 commits · Sep 2, 2021
Changes from 1 commit
Update docs links within blogs
Signed-off-by: Pallavi-PH <pallaviph02@gmail.com>
Pallavi-PH committed Aug 31, 2021
commit e8f54829a91a603700e7cbe0617a335048f45c1d
4 changes: 2 additions & 2 deletions website/src/blogs/atlassian-jira-deployment-on-openebs.md
@@ -13,11 +13,11 @@ excerpt: Learn how to deploy Atlassian Jira on OpenEBS in this short post.

#### Install OpenEBS

If OpenEBS is not installed in your K8s cluster, this can be done from [here](https://docs.openebs.io/docs/next/installation.html). If OpenEBS is already installed, go to the next step.
If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.

#### Configure cStor Pool

If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](https://docs.openebs.io/docs/next/ugcstor.html#creating-cStor-storage-pools). Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided:
If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools). Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided:

```
#Use the following YAMLs to create a cStor Storage Pool.
```
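The sample file is cut off by the hunk above; for orientation, a StoragePoolClaim of that era looked roughly like this sketch (not the file from this commit, and the `diskList` entries are placeholders that must match the disks NDM discovered on your nodes):

```
# Sketch of openebs-config.yaml; disk names below are placeholders.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  disks:
    diskList:
      - disk-0123456789abcdef   # find real names with: kubectl get disks
      - disk-fedcba9876543210
```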
@@ -46,7 +46,7 @@ OpenEBS uses a minimum of three replicas to run OpenEBS clusters with high avail
### Quick Steps to Set Up OpenEBS on GKE

- Set up a three-node GKE cluster with local disks by enabling the Cluster Autoscaler feature.
- Install OpenEBS on Kubernetes Nodes. This should be simple, and a couple of methods are discussed at the beginning of our docs, using either a Helm Chart or directly from Kubectl. More details are mentioned in the [OpenEBS documentation](https://docs.openebs.io/).
- Install OpenEBS on Kubernetes Nodes. This should be simple, and a couple of methods are discussed at the beginning of our docs, using either a Helm Chart or directly from Kubectl. More details are mentioned in the [OpenEBS documentation](/docs).
- Use OpenEBS Storage Classes to create Persistent Volumes for your stateful applications.
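
For a rough idea of the two install paths mentioned in the list above (a sketch; the manifest URL and the Helm 3 syntax are assumptions, so check the install page for the release you are using):

```
# Option 1: apply the operator manifest directly with kubectl
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

# Option 2: install via Helm (Helm 3 syntax shown)
helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install openebs openebs/openebs --namespace openebs --create-namespace
```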

## Detailed Explanation of OpenEBS 0.7 Cluster Deployment on GKE across AZs and Rebuilding of PVs.
@@ -17,7 +17,7 @@ So at a high level, to allow OpenEBS to run in privileged mode in SELinux=on nod

Here are the steps I have followed:

****Step 1: Setup appropriate security context for OpenEBS****
**Step 1: Setup appropriate security context for OpenEBS**

**On OpenShift Clusters:** Select the right SCC for OpenEBS

@@ -67,7 +67,7 @@ Then apply the YAML file

kubectl apply -f openebs-privileged-psp.yaml
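
This hunk does not show `openebs-privileged-psp.yaml` itself; a minimal privileged PodSecurityPolicy of that Kubernetes era (the `policy/v1beta1` API, since removed upstream) might look like the sketch below. Treat it as illustrative, not as the exact policy from the blog:

```
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: openebs-privileged
spec:
  privileged: true               # OpenEBS data-plane pods run privileged here
  allowPrivilegeEscalation: true
  allowedCapabilities: ['*']
  volumes: ['*']
  hostNetwork: true
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```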

****Step 2: Install OpenEBS****
**Step 2: Install OpenEBS**

Download the latest version of the `openebs-operator.yaml` file.
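For reference, fetching it typically looks like the following (both URLs are assumptions based on where OpenEBS has published the operator manifest; prefer a pinned release tag over `master` for reproducible installs):

```
# Location of the operator manifest is an assumption; verify against the docs.
wget https://openebs.github.io/charts/openebs-operator.yaml

# Or from the GitHub repo (replace master with a release tag to pin a version):
wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/openebs-operator.yaml
```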

@@ -124,22 +124,22 @@ Install OpenEBS

**Note: If you are using helm to install openebs, you will need to apply the above change after it has been installed. In a future release of the helm chart, I will work on making this a configurable parameter.**

****Step 3: (Optional) Create a new cStor Pool.****
**Step 3: (Optional) Create a new cStor Pool.**

You can skip this step if using the default cStor Sparse pool.

****Step 3a****: Verify all pods are working and cStor Pools are running
**Step 3a**: Verify all pods are working and cStor Pools are running

![List of all pods in openebs namespace after installation](/images/blog/pod-lists.png)

****Step 3b****: Verify that disks available on the nodes are discovered.
**Step 3b**: Verify that disks available on the nodes are discovered.

kubectl get disks


![Disks detected by NDM, along with sparse disks](/images/blog/ndm-detected-disks.png)

****Step 3c****: Create a storage pool claim using the instructions at [https://docs.openebs.io/docs/next/configurepools.html](https://docs.openebs.io/docs/next/configurepools.html)
**Step 3c**: Create a storage pool claim using the instructions at [https://docs.openebs.io/docs/next/configurepools.html](https://docs.openebs.io/docs/next/configurepools.html)

Create a `cstor-pool-config.yaml` as mentioned in the docs.

@@ -171,9 +171,9 @@ Apply this file `kubectl apply -f cstor-pool-config.yaml`

![3 cStor pool pods will be running](/images/blog/cstor-pool.png)

****Step 3d****: Create a new storage class using SPC as `cstor-pool1` or edit the default storage class to use the newly created SPC. I have edited the already available default storage class.
**Step 3d**: Create a new storage class using SPC as `cstor-pool1` or edit the default storage class to use the newly created SPC. I have edited the already available default storage class.
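
A StorageClass wired to the `cstor-pool1` SPC would look roughly like this (a sketch using the cas-annotation convention of that release; the class name and replica count are illustrative):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-pool1     # illustrative name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-pool1"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```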

****Step 4: Running Percona Application****
**Step 4: Running Percona Application**

wget https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/percona/percona-openebs-cstor-sparse-deployment.yaml

@@ -11,7 +11,7 @@ I had the fortune of [presenting](https://www.slideshare.net/OpenEBS/openebs-cas

## Just a quick recap on CAS:

[Container Attached Storage (CAS)](https://docs.openebs.io/docs/next/conceptscas.html) is a new storage architecture to run the entire storage software in containers and hence in user space. This architecture has many benefits, primary one being “a dedicated storage controller per application” and bring in the possibility of hardening the storage controller for a given application workload. Read more on the benefits at the [CNCF blog](https://www.cncf.io/blog/2018/04/19/container-attached-storage-a-primer/). A typical CAS architecture example is shown below.
[Container Attached Storage (CAS)](/docs/concepts/cas) is a new storage architecture to run the entire storage software in containers and hence in user space. This architecture has many benefits, primary one being “a dedicated storage controller per application” and bring in the possibility of hardening the storage controller for a given application workload. Read more on the benefits at the [CNCF blog](https://www.cncf.io/blog/2018/04/19/container-attached-storage-a-primer/). A typical CAS architecture example is shown below.

![CAS architecture with controller and replica pods for each application](https://cdn-images-1.medium.com/max/800/1*4dJDmPbxxrP-fZK7NZZmYg.png)

@@ -19,7 +19,7 @@ Pre-requisites:

- Beginner knowledge of Kubernetes
- Basic knowledge of GO Language
- Basic understanding of how OpenEBS functions. You can get started with OpenEBS [docs](https://docs.openebs.io/?__hstc=216392137.a9b75e72cb4b227999b631a7d9fb75d2.1579850476359.1579850476359.1579850476359.1&amp;__hssc=216392137.1.1579850476359&amp;__hsfp=3765904294) and the OpenEBS [white paper](https://www.openebs.io/assets/docs/WP-OpenEBS-0_7.pdf?__hstc=216392137.a9b75e72cb4b227999b631a7d9fb75d2.1579850476359.1579850476359.1579850476359.1&amp;__hssc=216392137.1.1579850476359&amp;__hsfp=3765904294)
- Basic understanding of how OpenEBS functions. You can get started with OpenEBS [docs](/docs?__hstc=216392137.a9b75e72cb4b227999b631a7d9fb75d2.1579850476359.1579850476359.1579850476359.1&amp;__hssc=216392137.1.1579850476359&amp;__hsfp=3765904294) and the OpenEBS white paper

## Setting up the Development Environment for mayactl

@@ -75,7 +75,7 @@ Note: — This step is not required if you are using the OpenEBS version 0.9 whi

### Step 4:

Configuration of storage pool, storage class and PVC are like any other platform and the steps are outlined in [https://docs.openebs.io](https://docs.openebs.io/?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294)
Configuration of storage pool, storage class and PVC are like any other platform and the steps are outlined in [https://openebs.io/docs](/docs?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294)

Pool Configuration — [https://docs.openebs.io/docs/next/configurepools.html#manual-mode](https://docs.openebs.io/docs/next/configurepools.html?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#manual-mode)
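
The PVC side is plain Kubernetes once a pool and StorageClass exist; a minimal claim against an OpenEBS class might look like this (names and size are placeholders):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-vol-claim                     # placeholder
spec:
  storageClassName: openebs-cstor-pool1    # placeholder OpenEBS class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```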

@@ -25,11 +25,11 @@ Here are a few of the advantages of using OpenEBS in conjunction with a Yugabyte

- There’s no need to manage the local disks as OpenEBS manages them.
- OpenEBS and YugabyteDB can provision large size persistent volumes.
- With OpenEBS persistent volumes, capacity can be thin provisioned, and disks can be added to OpenEBS on the fly without disruption of service. When this capability is combined with YugabyteDB, which already supports multi-TB data density per node, this can prove to be[ massive cost savings on storage.](https://docs.openebs.io/features.html#reduced-storage-tco-upto-50)
- Both OpenEBS and YugabyteDB support multi-cloud deployments [helping organizations avoid cloud lock-in.](https://docs.openebs.io/docs/next/features.html#truely-cloud-native-storage-for-kubernetes)
- Both OpenEBS and YugabyteDB integrate with another CNCF project, [Prometheus](https://prometheus.io/). This makes it easy to [monitor both storage and the database](https://docs.openebs.io/docs/next/features.html#prometheus-metrics-for-workload-tuning) from a single system.
- With OpenEBS persistent volumes, capacity can be thin provisioned, and disks can be added to OpenEBS on the fly without disruption of service. When this capability is combined with YugabyteDB, which already supports multi-TB data density per node, this can prove to be massive cost savings on storage.
- Both OpenEBS and YugabyteDB support multi-cloud deployments helping organizations avoid cloud lock-in.
- Both OpenEBS and YugabyteDB integrate with another CNCF project, [Prometheus](https://prometheus.io/). This makes it easy to [monitor both storage and the database](/docs/introduction/features#prometheus-metrics-for-workload-tuning) from a single system.

Additionally, OpenEBS can do [synchronous replication](https://docs.openebs.io/docs/next/features.html#synchronous-replication) inside a geographic region. In a scenario where YugabyteDB is deployed across regions, and a node in any one region fails, YugaByteDB would have to rebuild this node with data from another region. This would incur cross-region traffic, which is more expensive and lower in performance. But, with OpenEBS, this rebuilding of a node can be done seamlessly because OpenEBS is replicating locally inside the region. This means YugabyteDB does not end up having to copy data from another region, which ends up being less expensive and higher in performance. In this deployment setup, only if the entire region failed, YugabyteDB would need to do a cross-region node rebuild. Additional detailed descriptions of OpenEBS enabled use cases can be found [here.](https://docs.openebs.io/docs/next/usecases.html)
Additionally, OpenEBS can do [synchronous replication](/docs/introduction/features#synchronous-replication) inside a geographic region. In a scenario where YugabyteDB is deployed across regions, and a node in any one region fails, YugaByteDB would have to rebuild this node with data from another region. This would incur cross-region traffic, which is more expensive and lower in performance. But, with OpenEBS, this rebuilding of a node can be done seamlessly because OpenEBS is replicating locally inside the region. This means YugabyteDB does not end up having to copy data from another region, which ends up being less expensive and higher in performance. In this deployment setup, only if the entire region failed, YugabyteDB would need to do a cross-region node rebuild. Additional detailed descriptions of OpenEBS enabled use cases can be found [here](/docs/introduction/usecases).

Ok, let’s get started!

@@ -258,7 +258,7 @@ That’s it! You now have a 3 node YugabyteDB cluster running on GKE with OpenEB
**Next Steps**
As mentioned, MayaData is the chief sponsor of the OpenEBS project. It offers an enterprise-grade OpenEBS platform that makes it easier to run stateful applications on Kubernetes by helping get your workloads provisioned, backed up, monitored, logged, managed, tested, and even migrated across clusters and clouds. You can learn more about MayaData [here](https://mayadata.io/).

- Learn more about OpenEBS by visiting the [GitHub](https://github.com/openebs/openebs) and [official Docs](https://docs.openebs.io/) pages.
- Learn more about OpenEBS by visiting the [GitHub](https://github.com/openebs/openebs) and [official Docs](/docs) pages.
- Learn more about YugabyteDB by visiting the [GitHub](https://github.com/yugabyte/yugabyte-db) and [official Docs](https://docs.yugabyte.com/) pages.

**About the author:**
@@ -59,7 +59,7 @@ OpenEBS uses a Declarative Data Plane to manage storage operations which aligns

### Configuring a Dynamic localPV for ECK

The StorageClass spec for [OpenEBS LocalPV](https://docs.openebs.io/docs/next/uglocalpv.html) for automatically choosing an available disk on the node and mounting that disk with ext4 volume would look like the following:
The StorageClass spec for [OpenEBS LocalPV](/docs/concepts/localpv) for automatically choosing an available disk on the node and mounting that disk with ext4 volume would look like the following:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1alpha1
@@ -86,7 +86,7 @@ The StorageClass spec for [OpenEBS LocalPV](https://docs.openebs.io/docs/next/ug
storageClassName: OpenEBS-LocalPV
EOF

The StorageClass spec for [OpenEBS LocalPV](https://docs.openebs.io/docs/next/uglocalpv.html) for automatically choosing an available disk on the node and mounting that disk with ext4 volume would look like the following:
The StorageClass spec for [OpenEBS LocalPV](/docs/concepts/localpv) for automatically choosing an available disk on the node and mounting that disk with ext4 volume would look like the following:

apiVersion: storage.k8s.io/v1
kind: StorageClass
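The manifest is cut off by the hunk above; a complete LocalPV StorageClass in the "device" mode the paragraph describes would look roughly like this sketch (annotation-based config of that era; the name is illustrative):

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device          # illustrative name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
      - name: FSType
        value: "ext4"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```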
@@ -101,7 +101,7 @@ Check if all the cluster components are configured successfully and all the pods

#### **Install OpenEBS**

OpenEBS is a CNCF project delivering persistent block storage to the workloads deployed in Kubernetes.[cStor](https://docs.openebs.io/docs/next/cstor.html?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807) is one of the storage engines provided by OpenEBS besides [Jiva](https://docs.openebs.io/docs/next/jiva.html?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807) and [Local PV.](https://docs.openebs.io/docs/next/localpv.html?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807).
OpenEBS is a CNCF project delivering persistent block storage to the workloads deployed in Kubernetes. [cStor](/docs/concepts/cstor?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807) is one of the storage engines provided by OpenEBS besides [Jiva](/docs/concepts/jiva?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807) and [Local PV](/docs/concepts/localpv?__hstc=216392137.6a5433d986ca5a9bb31cbcea3a03df67.1585216160857.1585216160857.1585216160857.1&amp;__hssc=216392137.1.1585216160858&amp;__hsfp=170476807).

cStor was not supported in K3OS till k3os-v0.8.0 due to this [issue](https://github.com/rancher/k3os/issues/151). This issue has been addressed in v0.9.0 by adding udev support.

2 changes: 1 addition & 1 deletion website/src/blogs/ha-vs-dr-and-ha-c2-b2-for-your-db.md
@@ -93,7 +93,7 @@ Whichever method you use, keep in mind that the granularity of control that cont

I hope this blog is of use to those of you wrestling with ways to ensure resilience while running real (stateful) workloads on Kubernetes. The good news is that OpenEBS and other open source projects in and around Kubernetes are quickly accumulating thousands or tens of thousands of production hours and there are many experts that frequent such channels and are often ready and willing to help.

Some of this experience informs our docs in the OpenEBS community, including common patterns for workloads such as Minio, MySql, Cassandra, Elastic and many others: [https://docs.openebs.io/docs/next/mysql.html](https://docs.openebs.io/docs/next/mysql.html)
Some of this experience informs our docs in the OpenEBS community, including common patterns for workloads such as Minio, MySql, Cassandra, Elastic and many others: [https://openebs.io/docs/stateful-applications/mysql](/docs/stateful-applications/mysql)

As mentioned, you can also see these and other workloads on display as each commit to master for OpenEBS is tested against them. You can even choose to inject chaos into the testing of these workloads on OpenEBS as it is developed and matured: [https://openebs.ci/workload-dashboard](https://openebs.ci/workload-dashboard)

@@ -17,7 +17,7 @@ That guide shows you how to cluster multiple instances of the application behind

Deploying Jira on Kubernetes using OpenEBS is as simple as installing OpenEBS on Kubernetes, defining a storage pool, a storage class, and a persistent volume claim, and then deploying the Jira container. That’s it… and if you are already using OpenEBS it is even simpler. Now, as for the specifics of how to do those things, see this guide:

[Jira - OpenEBS docs](https://docs.openebs.io/docs/next/jira.html?__hstc=216392137.fb75a0ac1e54cb037dfbafd0edf1ad3f.1579868085240.1579868085240.1579868085240.1&amp;__hssc=216392137.1.1579868085240&amp;__hsfp=3765904294)
[Jira - OpenEBS docs](/docs/stateful-applications/jira?__hstc=216392137.fb75a0ac1e54cb037dfbafd0edf1ad3f.1579868085240.1579868085240.1579868085240.1&amp;__hssc=216392137.1.1579868085240&amp;__hsfp=3765904294)
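
To make the install-pool-class-claim-deploy flow above concrete, a stripped-down Jira Deployment bound to an OpenEBS-backed claim might look like this sketch (image tag, claim name, and paths are assumptions, not taken from the guide):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jira
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jira
  template:
    metadata:
      labels:
        app: jira
    spec:
      containers:
        - name: jira
          image: atlassian/jira-software:8.5   # illustrative tag
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jira-home
              mountPath: /var/atlassian/application-data/jira
      volumes:
        - name: jira-home
          persistentVolumeClaim:
            claimName: jira-home-claim   # a PVC backed by an OpenEBS StorageClass
```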

Once you have Jira deployed on your cluster, the easiest way to see your storage resources is through MayaOnline (hopefully you connected to MayaOnline while following the guide; if not, the [instructions are here](https://docs.openebs.io/docs/next/mayaonline.html?__hstc=216392137.fb75a0ac1e54cb037dfbafd0edf1ad3f.1579868085240.1579868085240.1579868085240.1&amp;__hssc=216392137.1.1579868085240&amp;__hsfp=3765904294)). Here is an example of a Jira deployment as visualized through the MayaOnline topology pane:

@@ -71,7 +71,7 @@ OpenEBS provides different types of Local Volumes that can be used to provide lo

I hope this overview of LocalPV options and OpenEBS Local has been useful. I plan to follow this with further blogs that get into the details of each flavor of the OpenEBS Local PV.

In the meantime, you can get started easily with [OpenEBS Local PV](https://docs.openebs.io/docs/next/overview.html), and the community is always available on the Kubernetes Slack #openebs channel.
In the meantime, you can get started easily with [OpenEBS Local PV](/docs), and the community is always available on the Kubernetes Slack #openebs channel.

Or read more on what our OpenEBS users and partners have to say about Local PV. From our friends at 2nd Quadrant (now part of EDB): [Local Persistent Volumes and PostgreSQL usage in Kubernetes](https://www.2ndquadrant.com/en/blog/local-persistent-volumes-and-postgresql-usage-in-kubernetes/)

@@ -19,7 +19,7 @@ This blog will focus on the steps to be followed to create the OpenEBS PV on Goo

## PRE-REQUISITES

- 3-Node GKE cluster with the OpenEBS Operator installed (Refer: [https://docs.openebs.io/docs/cloudsolutions.html](https://docs.openebs.io/docs/cloudsolutions.html))
- 3-Node GKE cluster with the OpenEBS Operator installed.
- 3-Google Persistent Disks, one attached to each node of the cluster. This can be done using the **_gcloud compute disks create_** & **_gcloud compute instances attach-disk_** commands, as shown below (Refer for console steps: [https://cloud.google.com/compute/docs/disks/add-persistent-disk#create_disk](https://cloud.google.com/compute/docs/disks/add-persistent-disk#create_disk))
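
For example, creating one disk and attaching it to a node (disk name, node name, size, and zone are placeholders):

```
gcloud compute disks create gpd-disk-1 --size=100GB --zone=us-central1-a
gcloud compute instances attach-disk gke-node-1 --disk=gpd-disk-1 --zone=us-central1-a
```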

### STEP-1: Format the GPDs & Mount into desired path