chore(docs): remove completed items from roadmap #3519

Merged
merged 4 commits into from
Feb 23, 2022
18 changes: 9 additions & 9 deletions README.md
@@ -32,11 +32,11 @@ Some key aspects that make OpenEBS different compared to other traditional stora
- OpenEBS supports a range of storage engines so that developers can deploy the storage technology appropriate to their application design objectives. Distributed applications like Cassandra can use the LocalPV engine for lowest latency writes. Monolithic applications like MySQL and PostgreSQL can use the ZFS engine (cStor) for resilience. Streaming applications like Kafka can use the NVMe engine [Mayastor](https://github.com/openebs/Mayastor) for best performance in edge environments. Across engine types, OpenEBS provides a consistent framework for high availability, snapshots, clones and manageability.

OpenEBS itself is deployed as just another container on your host and enables storage services that can be designated on a per pod, application, cluster or container level, including:
- Automate the management of storage attached to the Kubernetes worker nodes and allow the storage to be used for Dynamically provisioning OpenEBS PVs or Local PVs.
- Automate the management of storage attached to the Kubernetes worker nodes and allow the storage to be used for dynamically provisioning OpenEBS Replicated or Local PVs (see the sketch after this list).
- Data persistence across nodes, dramatically reducing time spent rebuilding Cassandra rings for example.
- Synchronization of data across availability zones and cloud providers improving availability and decreasing attach/detach times for example.
- Synchronous replication of volume data across availability zones, improving availability and decreasing attach/detach times, for example.
- A common layer so that whether you are running on AKS, bare metal, GKE, or AWS, your wiring and developer experience for storage services is as similar as possible.
- Management of tiering to and from S3 and other targets.
- Backup and Restore of volume data to and from S3 and other targets.
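
As a minimal sketch of the dynamic provisioning flow described above (assuming the `openebs-hostpath` StorageClass that a default install creates; the claim name and size below are hypothetical), a workload selects an engine simply by naming its StorageClass in a PersistentVolumeClaim:

```
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-local-pvc                 # hypothetical claim name
spec:
  storageClassName: openebs-hostpath   # Local PV engine; a cStor or Mayastor class would select a replicated engine
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
```

The claim is bound once the matching provisioner creates the volume on (or replicated across) the selected nodes.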

An added advantage of being a completely Kubernetes native solution is that administrators and developers can interact with and manage OpenEBS using all the wonderful tooling that is available for Kubernetes like kubectl, Helm, Prometheus, Grafana, Weave Scope, etc.

@@ -62,13 +62,13 @@ helm repo update
helm install --namespace openebs --name openebs stable/openebs
```
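
As a quick post-install check (assuming the `openebs` namespace used above; the exact pod and StorageClass names vary by OpenEBS version), confirm the control-plane pods are running and the default StorageClasses exist:

```
kubectl get pods -n openebs      # control-plane components should reach Running
kubectl get storageclass         # default classes such as openebs-hostpath should be listed
```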

You could also follow our [QuickStart Guide](https://docs.openebs.io/docs/overview.html).
You could also follow our [QuickStart Guide](https://openebs.io/docs).

OpenEBS can be deployed on any Kubernetes cluster - either in the cloud, on-premise or developer laptop (minikube). Note that there are no changes to the underlying kernel that are required as OpenEBS operates in userspace. Please follow our [OpenEBS Setup](https://docs.openebs.io/docs/overview.html) documentation. Also, we have a Vagrant environment available that includes a sample Kubernetes deployment and synthetic load that you can use to simulate the performance of OpenEBS. You may also find interesting the related project called [Litmus](https://litmuschaos.io) which helps with chaos engineering for stateful workloads on Kubernetes.
OpenEBS can be deployed on any Kubernetes cluster - either in the cloud, on-premises, or on a developer laptop (minikube). Note that no changes to the underlying kernel are required, as OpenEBS operates in user space. Please follow our [OpenEBS Setup](https://openebs.io/docs/user-guides/quickstart) documentation.

## Status

OpenEBS is one of the most widely used and tested Kubernetes storage infrastructures in the industry. A CNCF Sandbox project since May 2019, OpenEBS is the first and only storage system to provide a consistent set of software-defined storage capabilities on multiple backends (local, nfs, zfs, nvme) across both on-premise and cloud systems, and was the first to open source its own Chaos Engineering Framework for Stateful Workloads, the [Litmus Project](https://litmuschaos.io), which the community relies on to automatically readiness assess the monthly cadence of OpenEBS versions. Enterprise customers have been using OpenEBS in production since 2018 and the project supports 2.5M+ docker pulls a week.
OpenEBS is one of the most widely used and tested Kubernetes storage infrastructures in the industry. A CNCF Sandbox project since May 2019, OpenEBS is the first and only storage system to provide a consistent set of software-defined storage capabilities on multiple backends (local, nfs, zfs, nvme) across both on-premises and cloud systems, and was the first to open source its own Chaos Engineering Framework for Stateful Workloads, the [Litmus Project](https://litmuschaos.io), which the community relies on to automatically assess the readiness of each OpenEBS release. Enterprise customers have been using OpenEBS in production since 2018.

The status of the various storage engines that power OpenEBS Persistent Volumes is provided below. The key differences between the statuses are summarized below:
- **alpha:** The API may change in incompatible ways in a later software release without notice; recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
@@ -79,11 +79,11 @@ The status of various storage engines that power the OpenEBS Persistent Volumes
| Storage Engine | Status | Details |
|---|---|---|
| Jiva | stable | Best suited for running Replicated Block Storage on nodes that make use of ephemeral storage on the Kubernetes worker nodes |
| cStor | beta | A preferred option for running on nodes that have Block Devices. Recommended option if Snapshot and Clones are required |
| Local Volumes | beta | Best suited for Distributed Application that need low latency storage - direct-attached storage from the Kubernetes nodes. |
| cStor | stable | A preferred option for running on nodes that have Block Devices. Recommended option if Snapshots and Clones are required |
| Local Volumes | stable | Best suited for Distributed Applications that need low latency storage - direct-attached storage from the Kubernetes nodes. |
| Mayastor | beta | A new storage engine that operates at the efficiency of Local Storage but also offers storage services like Replication. Development is underway to support Snapshots and Clones. |

For more details, please refer to [OpenEBS Documentation](https://docs.openebs.io/docs/next/overview.html).
For more details, please refer to [OpenEBS Documentation](https://openebs.io/docs/).

## Contributing

6 changes: 3 additions & 3 deletions RELEASE.md
@@ -1,6 +1,6 @@
# Release Process

OpenEBS follows a monthly release cadence. The process is as follows:
OpenEBS follows a quarterly release cadence. The process is as follows:

The scope of the release is determined by:
- contributor availability,
@@ -11,6 +11,6 @@ The scope of the release is determined by:
1. At the start of the release cycle, one of the contributors takes on the role of release manager and works with the OpenEBS Maintainers to co-ordinate the release activities.
1. Contributors sync up over [community calls and slack](./community/) to close on the release tasks. The release manager runs the community calls for a given release. In the community call, the risks are identified and mitigated by seeking additional help or by pushing the task to the next release.
1. The various release management tasks are explained in the [release process document](./contribute/process/release-management.md).
1. OpenEBS release is made via GitHub. Once all the components are released, [Change Summary](https://github.com/openebs/openebs/wiki) is updated and [openebs/openebs](https://github.com/openebs/openebs/releases) repo is tagged with the release.
1. The OpenEBS release is made via GitHub. Once all the components are released, the Change Summary is published along with the [openebs/openebs](https://github.com/openebs/openebs/releases) release tag.
1. OpenEBS release is announced on [all Community reach out channels](./community/).
1. The release tracker GitHub project is closed
1. The release tracker GitHub project is closed.
90 changes: 27 additions & 63 deletions ROADMAP.md
@@ -7,9 +7,9 @@ OpenEBS follows a lean project management approach by splitting the development

## Current

These are some of the backlogs that are prioritized and planned to be completed within the next major release (e.g. OpenEBS 3.0). While the following are planned items, higher priority is given to usability and stability issues reported by the community. The completion of these items also depends on the availability of contributors.
These are some of the backlogs that are prioritized and planned to be completed within the next major release (e.g. OpenEBS 4.0). While the following are planned items, higher priority is given to usability and stability issues reported by the community. The completion of these items also depends on the availability of contributors.

Note: OpenEBS follows a monthly release cadence with a new minor release on the 15th of every month. For the most current plan and status check out the [release project trackers](https://github.com/orgs/openebs/projects) or the component specific trackers listed below. This document is reviewed and updated by the maintainers after each release.
Note: OpenEBS follows a quarterly release cadence with a new minor release around the end of each quarter. For the most current plan and status, check out the [release project trackers](https://github.com/orgs/openebs/projects). This document is reviewed and updated by the maintainers after each major release.


### Dynamic Local PVs
@@ -21,42 +21,28 @@ Note: OpenEBS follows a monthly release cadence with a new minor release on the
- https://github.com/openebs/device-localpv
- https://github.com/openebs/node-disk-manager
- Backlogs
- [Done] Support for incremental and full Backups for ZFS Local PV
- [Done] Split the Local Provisioner for hostpath and device from openebs/maya into its own repository
- [Done] Support for specifying node affinity on Local Volumes using custom labels
- [Done] Support for Dynamic Provisioning of Local PV backed by LVM
- [Done] Support for Dynamic Provisioning of Local PV backed by Device Partitions
- [Done] Capacity based scheduling for ZFS,LVM and Device Local PV.
- [Done] Support for setting IOPS limits for the LVM Local PV
- [In-progress] Set quota on the hostpath volumes created on XFS filesystem
- [In-progress] Expose prometheus metrics
- [In-progress] Add additional integration and end-to-end tests
- Shared VG for LVM Local PV.
- Data populator for moving Local PVs across nodes.

### Mayastor
- Source repositories
- https://github.com/openebs/Mayastor
- https://github.com/openebs/moac
- https://github.com/openebs/moac (deprecated)
- https://github.com/openebs/mayastor-control-plane
- Backlogs
- [Done] User applications can continue to access volumes when the nexus hosting them fails (e.g. Mayastor container crashes or is otherwise rescheduled, or its host node is lost or disconnected)
- [Done] It should be possible for Moac (and all other significant control plane components) to be rescheduled within a cluster
- [In-progress] Refactoring for better control plane and stability fixes
- [In-progress] Add additional integration and end-to-end tests
- Mayastor Replica placement should be topology aware
- Mayastor should expose metrics which meet the needs of the SRE persona, to trend review throughput, latency, capacity utilisation and errors
- Multi-arch builds for all Mayastor components
- Mayastor Replica placement should be topology aware to support zonal (or HA) distribution of StatefulSets.
- Mayastor should expose metrics that meet the needs of the SRE persona, to trend review throughput, latency, capacity utilization, and errors
- Support for VolumeSnapshot
- API refactoring exposed via gRPC
- Allow a new replica to be created within the same Mayastor Pool as the failed replica it replaces

### Jiva
- Source repositories
- https://github.com/openebs/jiva
- https://github.com/openebs/jiva-operator
- Backlogs
- [Done] Enhance Jiva Operator functionality to reduce manual steps around launching new replicas when node is completely removed from the cluster
- [Done] Add additional integration tests to Jiva CSI Driver to move towards beta
- [Done] Consolidate the CSI driver and Jiva control plane into single repo
- [In-progress] Automate the migration of volumes from out-of-tree provisioners to CSI Driver
- [In-progress] Add additional integration and end-to-end tests
- Deprecate the v1alpha1 CRDs in favor of the v1 CRDs introduced in 3.1


### cStor
- Source repositories
@@ -69,24 +55,13 @@ Note: OpenEBS follows a monthly release cadence with a new minor release on the
- https://github.com/openebs/api
- https://github.com/openebs/upgrade
- Backlogs
- [Done] Move the Backup/Restore related API to v1
- [Done] Automate the migration of volumes from out-of-tree provisioner to CSI Driver
- [In-progress] Additional integration and e2e tests to help move cStor towards stable
- Upstream uZFS changes and start using them instead of a local fork

### NDM
- Source repositories
- https://github.com/openebs/node-disk-manager
- Backlogs
- [Done] Enhance the discovery probes to identify virtual storage (without WWN) moving across nodes
- [Done] Add gRPC API to list and re-scan block device
- [Done] Enhance the discovery probes to detect if the device already has device mapper, zfs and so forth
- [Done] Scan for device media errors and report them as prometheus metrics via ndm-exporter
- [Done] Label the block devices so that they can be reserved for use by different StorageClasses
- [In-progress] Auto-detecting capacity and mountpoint changes and updating the block device CR
- [In-progress] Additional integration and e2e tests
- Support for using a custom node label to claim devices (instead of default kubernetes.io/hostname)
- Support Bulk BDC requests to claim multiple block devices that satisfy affinity or anti-affinity rules of applications. Example: two block devices from same node or two block devices from different nodes.
- Support for device configuration tasks like partitioning, mounting or unmounting devices by adding new services via NDM gRPC API layer.
- None


### Others
@@ -95,54 +70,42 @@ Note: OpenEBS follows a monthly release cadence with a new minor release on the
- https://github.com/openebs/openebsctl
- https://github.com/openebs/monitoring
- https://github.com/openebs/website
- https://github.com/openebs/openebs-docs
- https://github.com/openebs/maya
- https://github.com/openebs/m-exporter
- https://github.com/openebs/openebs-k8s-provisioner
- https://github.com/openebs/dynamic-nfs-provisioner
- https://github.com/openebs/openebs-k8s-provisioner (deprecated)
- https://github.com/openebs/openebs-docs (deprecated)
- https://github.com/openebs/maya (deprecated)
- Backlogs
- [Done] Move towards GitHub actions based builds from Travis for all the repositories.
- [Done] Enable multi-arch builds.
- [Done] Add OpenAPI validations for the OpenEBS CRDs
- [Done] Building additional Grafana Dashboards for OpenEBS Components, Block Devices, Pools and Volumes, that can be used to monitor SLOs
- [Done] Dashboard/UI for monitoring and managing cStor pools and volumes
- [Done] Split the provisioners and/or operators from the mono-repos [openebs/maya](https://github.com/openebs/maya) and [openebs/external-storage](https://github.com/openebs/external-storage) into individual repos
- [Done] Simplify the setup of NFS based Read-Write-Many volumes using OpenEBS RWO block volumes
- [Done] Add the existing functionality available in [mayactl](https://github.com/openebs/maya/tree/master/cmd/mayactl) for volume management to openebsctl
- [In-progress] Provide component-level helm charts that can then be used as dependent charts by the openebs chart
- [In-progress] Refactor the website and user documentation to be built as a single website using Hugo, similar to other CNCF projects
- [In-progress] Add support for Kyverno, as a replacement for PSP
- [In-progress] Integrate the content sites - website and documentation - into a single repo.

- Enhancements to OpenEBS CLI (openebsctl) for better troubleshooting of OpenEBS components and fixing errors
- User-friendly installation & configuration command-line tool (analogous to the linkerd CLI for linkerd)
- Migrate the CI to CNCF infrastructure from vendor infrastructure

## Near Term

Typically the items under this category fall under the next major release (after the current one, e.g. 4.0). At a high level, the focus is on moving the beta engines towards stable by adding more automated e2e tests and updating the corresponding user and contributor documents. To name a few backlogs (not in any particular order) on the near-term radar, where we are looking for additional help:


- Support for Mayastor Volume resize
- Support for pluggable storage backend for Mayastor (example: replace blobstore with lvm)
- Support for specifying multiple hostpaths to be used with Local PV hostpath
- Ability to migrate the Local PVs to other nodes in the cluster to handle node upgrades
- Update user documentation with reference stacks of running various workloads using OpenEBS volumes
- Auto provisioning of block devices (on the external storage systems) that can be used with OpenEBS storage engines
- Enhancements to OpenEBS CLI (openebsctl) for better troubleshooting OpenEBS components and fixing the errors
- Setup E2e pipelines for ARM Clusters
- Conform with the new enhancements coming in the newer Kubernetes releases around Capacity based provisioning, CSI, and so forth
- Automate the workflows around handling scenarios like complete cluster failures that currently require some manual steps
- Custom Kubernetes storage schedulers to address auto-rebalancing of the data placed on the nodes to help with scale up/down of Kubernetes nodes
- User-friendly installation & configuration command-line tool (analogy to linkerd CLI for linkerd)
- Allow Mayastor Pools to incorporate more than one capacity contributing disk device
- Failed replicas should be garbage collected (return capacity to Mayastor Pool)
- Allow a new replica to be created within the same Mayastor Pool as the failed replica it replaces
- Auto-scaling up and down of cStor pools as the new nodes are added and removed
- Auto-upgrade of cStor Pools and Volumes when user upgrades control plane
- Asynchronous or DR replica for cStor and Mayastor volumes
- Support for restoring a volume (in-place) for supporting blue/green stateful deployments
- Upstream uZFS changes and start using them instead of a local fork

- Multi-arch builds for all Mayastor components
- Partial rebuild for the Mayastor replicas (similar to zfs resilvering)
- Support Bulk BDC requests to claim multiple block devices that satisfy affinity or anti-affinity rules of applications. Example: two block devices from same node or two block devices from different nodes.
- Support for device configuration tasks like partitioning, mounting or unmounting devices by adding new services via NDM gRPC API layer.

## Future

As the name suggests this bucket contains items that are planned for future. Sometimes the items are related to adapting to the changes coming in the Kubernetes repo or other related projects. Github milestone called [future backlog](https://github.com/openebs/openebs/milestone/11) is used to track these requests .
As the name suggests this bucket contains items that are planned for future. Sometimes the items are related to adapting to the changes coming in the Kubernetes repo or other related projects. Github milestone called [future backlog](https://github.com/openebs/openebs/milestone/11) is used to track these requests.

# Getting involved with Contributions

@@ -151,3 +114,4 @@ We are always looking for more contributions. If you see anything above that you
- [Joining OpenEBS contributor community on Kubernetes Slack](https://kubernetes.slack.com)
- Already signed up? Head to our discussions at [#openebs-dev](https://kubernetes.slack.com/messages/openebs-dev/)
- [Joining our Community meetings](https://github.com/openebs/openebs/tree/master/community)
