Update links of blogs and docs within blogs #257

Merged · 9 commits · Sep 2, 2021
@@ -14,7 +14,7 @@ Kubernetes is nearly ready as a layer enabling hyper convergence, as the compute

When it comes to storage, however, there are a few pieces that are missing. Once added to Kubernetes, these pieces will unlock a number of benefits to users of Kubernetes including better resource utilization, reduction of noisy neighbor phenomena, simpler management, isolation at the node level thereby reducing the potential blast radius of failures, and, perhaps most importantly, further ownership and management of relevant infrastructure per workload and per DevOps team.

- Storage management capabilities in Kubernetes have improved in the last couple of years. For example, there is now clarity around how to connect a stateful application to persistent storage. The constructs of persistent volume claim (PVC), persistent volume (PV), and storage class (SC) along with dynamic provisioners from vendors have clarified how to connect a pod to a storage volume. With these Kubernetes constructs, a large ecosystem of legacy storage found its way to be connected to application pods. Many vendors and open source projects are so excited about this connectivity to cloud native environments that they have taken to calling their traditional storage “[cloud native](https://blog.openebs.io/cloud-native-storage-vs-marketers-doing-cloud-washing-c936089c2b58)”.
+ Storage management capabilities in Kubernetes have improved in the last couple of years. For example, there is now clarity around how to connect a stateful application to persistent storage. The constructs of persistent volume claim (PVC), persistent volume (PV), and storage class (SC) along with dynamic provisioners from vendors have clarified how to connect a pod to a storage volume. With these Kubernetes constructs, a large ecosystem of legacy storage found its way to be connected to application pods. Many vendors and open source projects are so excited about this connectivity to cloud native environments that they have taken to calling their traditional storage “[cloud native](/blog/cloud-native-storage-vs-marketers-doing-cloud-washing)”.

In order to explain why new tools and constructs are needed to improve the management of storage media, let’s start by reviewing pod connectivity. Shown below is a pod connected to external storage through a dynamic provisioner interface.

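The diagram itself is collapsed in this diff, but the wiring the paragraph describes can be sketched in a few manifests. This is a minimal illustration, not content from the post: the storage class name, provisioner string, and pod are placeholders.

```yaml
# Illustrative sketch of PVC/PV/SC wiring; all names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-dynamic-sc              # assumed name
provisioner: example.vendor/provisioner # placeholder for a vendor's dynamic provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: example-dynamic-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - mountPath: /data
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data              # pod mounts the claim, not the volume directly
```

The PVC names the storage class, the dynamic provisioner creates and binds a matching PV, and the pod simply mounts the claim.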

This file was deleted.

website/src/blogs/ansible-openebs-the-whys-and-hows.md (2 changes: 1 addition & 1 deletion)
@@ -25,7 +25,7 @@ One of the biggest IT trends over the last few years has been managing infrastru

But, how does the above address our question?

- **Answer**: A major portion of the test duration of infrastructure-based software, such as storage software involves “manipulation” of infrastructure. Setting up bare-metal boxes, virtual machines, or containers, installing packages, executing various commands that control & alter system state, monitoring for specific behavior are key aspects of this process. Consider the need to run the above as batch processes and perform parallel execution on multiple nodes — and the inevitability of a workflow orchestrator dawns upon you. Especially so when you are testing a solution like OpenEBS that is designed to provide storage for DevOps use cases (read more about this [here](https://blog.openebs.io/storage-infrastructure-as-code-using-openebs-6a76b37aebe6))
+ **Answer**: A major portion of the test duration of infrastructure-based software, such as storage software involves “manipulation” of infrastructure. Setting up bare-metal boxes, virtual machines, or containers, installing packages, executing various commands that control & alter system state, monitoring for specific behavior are key aspects of this process. Consider the need to run the above as batch processes and perform parallel execution on multiple nodes — and the inevitability of a workflow orchestrator dawns upon you. Especially so when you are testing a solution like OpenEBS that is designed to provide storage for DevOps use cases (read more about this [here](/blog/storage-infrastructure-as-code-using-openebs))

Is not an approach (and the tool) soaked in “**devops-ness**” a pre-requisite to test the storage solution specifically designed for DevOps use cases 🙂 ?

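To make the “manipulation” of infrastructure concrete, a minimal Ansible play along the lines the answer describes might look like the sketch below. The hosts, package, and log path are placeholders, not anything taken from the OpenEBS test suites.

```yaml
# Illustrative only: set up a node, alter its state, then watch for a behavior.
- hosts: storage-test-nodes          # placeholder inventory group
  become: true
  tasks:
    - name: Install node prerequisites
      apt:
        name: open-iscsi             # example prerequisite package
        state: present

    - name: Alter system state
      systemd:
        name: iscsid
        state: restarted

    - name: Monitor for a specific behavior
      wait_for:
        path: /var/log/test-run.log  # placeholder log produced by the test workload
        search_regex: "volume provisioned"
        timeout: 300
```
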
website/src/blogs/atlassian-jira-deployment-on-openebs.md (4 changes: 2 additions & 2 deletions)
@@ -13,11 +13,11 @@ excerpt: Learn how to deploy Atlassian Jira on OpenEBS in this short post.

#### Install OpenEBS

- If OpenEBS is not installed in your K8s cluster, this can be done from [here](https://docs.openebs.io/docs/next/installation.html). If OpenEBS is already installed, go to the next step.
+ If OpenEBS is not installed in your K8s cluster, this can be done from [here](/docs/user-guides/installation). If OpenEBS is already installed, go to the next step.

#### Configure cStor Pool

- If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](https://docs.openebs.io/docs/next/ugcstor.html#creating-cStor-storage-pools). Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided:
+ If cStor Pool is not configured in your OpenEBS cluster, this can be done from [here](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools). Sample YAML named **openebs-config.yaml** for configuring cStor Pool is provided:

```
#Use the following YAMLs to create a cStor Storage Pool.
```
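
The rest of **openebs-config.yaml** is collapsed in this diff. For orientation, a minimal StoragePoolClaim in the SPC-based cStor style that the updated link points to might look like the sketch below; the pool name, pool type, and block device IDs are placeholders and may differ from the collapsed contents of the original file.

```yaml
# Illustrative StoragePoolClaim sketch (deprecated SPC-based cStor flow).
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool            # assumed pool name
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped              # could also be mirrored, depending on the disks
  blockDevices:
    blockDeviceList:
      - blockdevice-node1-example  # placeholder block device IDs, one per node
      - blockdevice-node2-example
      - blockdevice-node3-example
```
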
@@ -46,7 +46,7 @@ OpenEBS uses a minimum of three replicas to run OpenEBS clusters with high avail
### Quick Steps to Set Up OpenEBS on GKE

- Set up a three-node GKE cluster with local disks by enabling the Cluster Autoscaler feature.
- - Install OpenEBS on Kubernetes Nodes. This should be simple, and a couple of methods are discussed at the beginning of our docs, using either a Helm Chart or directly from Kubectl. More details are mentioned in the [OpenEBS documentation](https://docs.openebs.io/).
+ - Install OpenEBS on Kubernetes Nodes. This should be simple, and a couple of methods are discussed at the beginning of our docs, using either a Helm Chart or directly from Kubectl. More details are mentioned in the [OpenEBS documentation](/docs).
- Use OpenEBS Storage Classes to create Persistent Volumes for your stateful applications.

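As a hedged illustration of the last step in that list, the sketch below defines a Storage Class that asks for the three replicas mentioned in the hunk context and a PVC that consumes it. The class name, pool claim, and capacity are assumptions (reusing the placeholder pool from the earlier sketch), not values from the post, and the annotation-based legacy cStor format is assumed rather than taken from the 0.7-era walkthrough.

```yaml
# Illustrative only: a Storage Class requesting three replicas, plus a PVC bound to it.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-three-replicas           # assumed name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"         # placeholder pool from the SPC sketch above
      - name: ReplicaCount
        value: "3"                       # three replicas for high availability
provisioner: openebs.io/provisioner-iscsi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-app-pvc                     # assumed name
spec:
  storageClassName: openebs-three-replicas
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
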
## Detailed Explanation of OpenEBS 0.7 Cluster Deployment on GKE across AZs and Rebuilding of PVs.