This repository has been archived by the owner on Apr 17, 2024. It is now read-only.

Commit

feat: replace or remove references to KSD (#30)
* feat: replace or remove references to KSD

* fix: remove last reference to KSD

* fix: broken link
dave-at-koor authored Dec 1, 2023
1 parent ab39e71 commit 8d5c8bd
Showing 25 changed files with 287 additions and 1,013 deletions.
40 changes: 10 additions & 30 deletions docs/.overrides/home.html
@@ -25,47 +25,27 @@
<div class="md-grid md-typeset">
<div class="mdx-hero">
<div class="mdx-hero__image">
<!--<img
src="assets/images/illustration.png"
alt=""
width="1659"
height="1200"
draggable="false"
/>-->
</div>
<div class="mdx-hero__content">
<h1>Koor Knowledge Base</h1>
<h2>What you need to know about KSD, Rook, and Ceph</h2>
<p>Getting the most out of Rook and Ceph</p>
<a
  href="{{ 'knowledge/koor/' | url }}"
  title="Find Answers"
  class="md-button md-button--primary">
  Find Answers
</a>
<a
  href="{{ 'support/help-desk' | url }}"
  title="Help Desk"
  class="md-button">
  Help Desk
</a>
<a
  href="{{ 'https://about.koor.tech/product' | url }}"
  title="Koor Free Trial"
  class="md-button">
  Koor Free Trial
</a>
</div>
</div>
2 changes: 1 addition & 1 deletion docs/getting-started/.pages
@@ -5,4 +5,4 @@ nav:
- solutions-for-storage.md
- introduction-to-ceph.md
- introduction-to-rook.md
- data-control-center-intro.md
33 changes: 19 additions & 14 deletions docs/getting-started/containers-and-persistent-storage.md
@@ -8,7 +8,7 @@ Each time a container is stopped or restarted, it starts with a clean state, eli

## Persistent Storage for Data

Many applications, such as databases, content management systems, and file servers, rely on persistent storage to maintain important data.
Persistent storage ensures that data remains intact and accessible even when containers are restarted, rescheduled, or scaled.

Persistent storage allows applications to:
@@ -20,34 +20,37 @@

Without persistent storage, applications would lose important data and face challenges in maintaining consistency and reliability.

Kubernetes offers the concepts of PersistentVolumeClaim (PVC) and PersistentVolume (PV) to fulfill this requirement.
Whether you're running Kubernetes on bare metal or in the cloud, you can leverage various storage options for PVs.
For example, on bare metal, you can utilize local storage devices such as hard drives or solid-state drives (SSDs) directly as PVs.
This enables your Kubernetes applications to have access to durable and scalable storage directly on the underlying hardware without relying on a cloud-specific storage service like AWS Elastic Block Store (EBS).

### Using PVC and PV on Kubernetes

- Create a PersistentVolume: Define a PersistentVolume (PV) manifest `pv.yaml` that describes the storage volume you want to make available to your applications. Here's an example PV manifest using a local storage volume:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/to/storage
```
- Create the PV resource:
```console
$ kubectl apply -f pv.yaml
```

- Create a PersistentVolumeClaim: Define a PersistentVolumeClaim (PVC) manifest `pvc.yaml` that requests storage resources from the available PVs. Here's an example PVC manifest:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc  # name assumed; the original manifest elides it
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
- Create the PVC resource:
```console
$ kubectl apply -f pvc.yaml
```

- Finally, reference the PVC from your pods:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod  # name assumed; the original manifest elides this section
spec:
  containers:
    - name: app
      image: nginx  # illustrative image
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc  # must match the PVC name
```

Managing storage in containerized environments introduces complexities due to the dynamic nature of containers and orchestration platforms like Kubernetes.
- Data Persistence: Ensuring that data remains persistent and accessible across container restarts, scaling events, and node failures.
- Storage Provisioning: Provisioning of storage resources to containers and pods, considering capacity, performance, and availability.
- Data Replication and Synchronization: Implementing mechanisms to replicate and synchronize data across multiple instances or nodes for high availability and fault tolerance.
- Dynamic Volume Provisioning: Dynamically provisioning and attaching storage volumes to containers as per demand, without manual intervention.
- Data Backup and Recovery: Implementing backup and recovery strategies to protect critical data and enable disaster recovery when needed.
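The last two bullets, dynamic provisioning in particular, are usually addressed in Kubernetes with a StorageClass: a PVC that names the class gets its volume created on demand instead of waiting for a pre-created PV. A minimal sketch, assuming a Rook-Ceph RBD backend; the class name, pool parameter, and provisioner string are illustrative and depend on your environment:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-rbd                           # hypothetical name
provisioner: rook-ceph.rbd.csi.ceph.com    # assumed; use your backend's CSI driver
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  pool: replicapool                        # illustrative, backend-specific
```

A PVC that sets `storageClassName: fast-rbd` would then be provisioned and bound automatically, with no manually created PV.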
These solutions can be hard to implement and manage when you are dealing with large amounts of data, on the order of terabytes. Fortunately, there are plenty of options available; let's look at them in detail in the next section, [Storage Solutions](solutions-for-storage.md).
5 changes: 5 additions & 0 deletions docs/getting-started/data-control-center-intro.md
@@ -0,0 +1,5 @@
# Koor Data Control Center

This is a new guide that helps you set up and use the Data Control Center in your Kubernetes cluster.

This is a work in progress. Stay tuned...
16 changes: 6 additions & 10 deletions docs/getting-started/index.md
@@ -1,12 +1,12 @@
---
title: Getting started
---
Welcome to the Getting Started Guide! We are excited that you have chosen the Koor Data Control Center to help manage Rook Ceph storage in Kubernetes.

We understand that many of these terms may be unfamiliar to you.
Perhaps you simply want to confirm that the Data Control Center is the solution you've been searching for.
In this guide, we will walk you through using the Koor Data Control Center step by step.

Here's a summary of the guide's index:

@@ -20,10 +20,6 @@

- **[Installing and Using Rook](introduction-to-rook.md)**: In this section, we will familiarize you with the fundamental terms related to Rook. Understanding these terms will enable you to effectively utilize Rook within your storage infrastructure.

- **[Koor Data Control Center Intro](data-control-center-intro.md)**: This is a guide to help you get started.

Let's start with a quick recap of Kubernetes: [Introduction to Kubernetes](kubernetes.md).
Enjoy your journey!
52 changes: 25 additions & 27 deletions docs/getting-started/introduction-to-ceph.md
@@ -8,56 +8,54 @@ Before we start, I would like to show a promotional video of Ceph, which summarizes it pretty well.

[Ceph](https://ceph.io/en/) is an open-source storage platform that provides scalable and reliable storage for different types of data. It can store unstructured data as objects and also supports block storage and a shared file system.


Rook uses Ceph as its storage solution, and therefore Koor does as well. Some of Ceph's key features are:

- **Distributed Architecture**: Ceph can scale horizontally by adding more storage nodes, making it fault-tolerant.
- **Object Storage**: It stores large amounts of unstructured data efficiently, with compatibility for existing applications.
- **Block Storage**: Ceph creates virtual block devices for applications and operating systems, offering features like cloning and snapshots.
- **POSIX File System**: It provides a shared file system experience for multiple clients to access simultaneously.
- **Data Protection**: Ceph offers data replication and erasure coding for fault tolerance and storage efficiency.
- **Auto-tiering**: Data is automatically moved between different storage tiers based on usage patterns.
- **Scalability and Performance**: Ceph scales from a few nodes to thousands, providing high throughput and low-latency access.
- **Community and Ecosystem**: Ceph has a strong open-source community and wide adoption, leading to continuous improvement.
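To make the **Data Protection** bullet above concrete, here is a quick back-of-the-envelope comparison of replication and erasure-coding storage overhead. This is a sketch; the `k` and `m` values are illustrative, not Ceph defaults:

```python
def replication_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data with n full copies."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """k data chunks plus m coding chunks: raw-to-user ratio is (k+m)/k."""
    return (k + m) / k

# Three-way replication vs. a k=4, m=2 erasure-coded pool:
print(replication_overhead(3))  # 3.0x raw capacity, tolerates 2 lost copies
print(erasure_overhead(4, 2))   # 1.5x raw capacity, tolerates 2 lost chunks
```

Both configurations survive two failures, but erasure coding halves the raw capacity required, at the cost of extra CPU work on reads and writes.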

#### Understanding Ceph's architecture

Ceph stands out by providing a unified system that combines object, block, and file storage.
Ceph exhibits remarkable scalability, enabling thousands of clients to access and manage petabytes to exabytes of data.
It achieves this by utilizing cost-effective hardware and intelligent daemons within Ceph Nodes.
A Ceph Storage Cluster effectively handles a large number of nodes that communicate, dynamically replicating and redistributing data.

![Stack](images/ceph_architecture.webp){ align=center }

#### Ceph components

Ceph provides an infinitely scalable Ceph Storage Cluster based upon [RADOS](https://ceph.io/assets/pdfs/weil-rados-pdsw07.pdf), the Reliable Autonomic Distributed Object Store.

A Ceph Storage Cluster consists of multiple types of daemons:

- **Ceph Monitor**: maintains a master copy of the cluster map. A cluster of Ceph Monitors ensures high availability should a monitor daemon fail. Storage cluster clients retrieve a copy of the cluster map from the Ceph Monitor.
- **Ceph OSD Daemon**: checks its own state and the state of other OSDs and reports back to monitors.
- **Ceph Manager**: acts as an endpoint for monitoring, orchestration, and plug-in modules.
- **Ceph Metadata Server (MDS)**: manages file metadata when CephFS is used to provide file services.

![Ceph OSD Daemons](images/ceph_daemons.webp){ align=center }

#### CRUSH Algorithm
Storage cluster clients and each Ceph OSD Daemon use the CRUSH (Controlled Replication Under Scalable Hashing) algorithm to efficiently compute information about data location, instead of having to depend on a central lookup table.

CRUSH distributes data evenly across available object storage devices in what is often described as a pseudo-random manner.
Distribution is controlled by a hierarchical cluster map called a [CRUSH map](https://docs.ceph.com/en/latest/rados/operations/crush-map/). The map, which can be customized by the storage administrator, informs the cluster about the layout and capacity of nodes in the storage network and
specifies how redundancy should be managed.

By allowing cluster nodes to calculate where a data item has been stored,
CRUSH avoids the need to look up data locations in a central directory. CRUSH also allows for nodes to be added or removed,
moving as few objects as possible while still maintaining balance across the new cluster configuration.

![CRUSH Algorithm](images/crush.webp){ align=center }
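The key idea, that every client computes data placement from a hash rather than consulting a central directory, can be sketched with rendezvous (highest-random-weight) hashing. This is a toy stand-in for illustration only, not Ceph's actual CRUSH implementation:

```python
import hashlib

def place(obj: str, osds: list[str], replicas: int = 2) -> list[str]:
    """Rank OSDs by a hash of (object, osd) and take the top `replicas`.

    Every client computes the same ranking independently, so no
    central lookup table is needed -- the property CRUSH provides.
    """
    def score(osd: str) -> int:
        return int(hashlib.sha256(f"{obj}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3"]
print(place("my-object", osds))
# Removing an OSD only remaps objects that were placed on it;
# everything else keeps its location:
print(place("my-object", [o for o in osds if o != "osd.0"]))
```

Real CRUSH adds the hierarchical cluster map and configurable failure domains on top of this idea, but the minimal-data-movement property shown here is the same.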

Ceph is a powerful storage solution designed to address block, file, and object storage needs. Using it in Kubernetes can be challenging, however, and Rook is designed to simplify that process. In the next chapter, [Installing and Using Rook](introduction-to-rook.md), we will learn more about this solution.