Update links of blogs and docs within blogs #257

Merged: 9 commits, merged Sep 2, 2021
updating docs links
Signed-off-by: Pallavi-PH <pallaviph02@gmail.com>
Pallavi-PH committed Sep 1, 2021
commit 78b9da879650b446ab31960f85e8f58684bc3ea6
143 changes: 74 additions & 69 deletions website/public/getPosts.js
@@ -4,77 +4,82 @@
const fs = require('fs');
const dirPath = path.join(__dirname, '../src/blogs');
let postList;
const getPosts = () => {
  postList = [];
  fs.readdir(dirPath, (err, files) => {
    if (err) {
      console.error(`Failed to load files from the directory${err}`);
    }
    files.forEach((file, index) => {
      const obj = {};
      let post;
      fs.readFile(`${dirPath}/${file}`, 'utf8', (err, contents) => {
        const getMetaDataIndices = (acc, elem, i) => {
          if (/^---/.test(elem)) {
            acc.push(i);
          }
          return acc;
        };
        const parseMetaData = ({ lines, metaDataIndices }) => {
          if (metaDataIndices.length) {
            const metadata = lines.slice(metaDataIndices[0] + 1, metaDataIndices[1]);
            metadata.forEach((line) => {
              obj[line.split(': ')[0]] = line.split(': ')[1];
            });
            return obj;
          }
        };

        const parseContent = ({ lines, metaDataIndices }) => {
          if (metaDataIndices.length) {
            lines = lines.slice(metaDataIndices[1] + 1, lines.length);
          }
          return lines.join('\n');
        };
        const sortAccrodingtoDate = (jsonObj) => {
          const dmyOrdD = (a, b) => myDate(b.date) - myDate(a.date);
          const myDate = (s) => { const a = s.split(/-|\//); return new Date(a[2], a[1] - 1, a[0]); };
          return jsonObj.sort(dmyOrdD);
        };

        const convertTitleToSlug = (Text) => Text
          .toLowerCase()
          .replace(/[^\w ]+/g, '')
          .replace(/ +/g, '-');

        const lines = contents.split('\n');
        const metaDataIndices = lines.reduce(getMetaDataIndices, []);
        const metadata = parseMetaData({ lines, metaDataIndices });
        const content = parseContent({ lines, metaDataIndices });

        if (metadata) {
          post = {
            id: index + 1,
            title: metadata.title || 'No title',
            author: metadata.author || 'No author',
            author_info: metadata.author_info || 'No author information',
            date: metadata.date || 'No date available',
            tags: metadata.tags.split(',').map((e) => e.trim()) || 'No tags available',
            excerpt: metadata.excerpt || '',
            content: content || 'No content available',
            notHasFeatureImage: metadata.not_has_feature_image,
          };
        }
        postList.push(post);
        if (postList.length === files.length) {
          const sortedJSON = sortAccrodingtoDate(postList);
          const sortedJSONWithID = sortedJSON.map((item) => ({ ...item, id: sortedJSON.indexOf(item) + 1, slug: convertTitleToSlug(item.title) }));
          const data = JSON.stringify(sortedJSONWithID);
          fs.writeFileSync('src/posts.json', data);
        }
      });
    });
  });
};

getPosts();
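As a standalone sketch (not part of this PR), the frontmatter parsing and slug logic in `getPosts.js` can be exercised on an in-memory sample post instead of files read from `../src/blogs`; `samplePost` below is an invented fixture:

```javascript
// Sketch of the parsing steps in getPosts.js, applied to a sample post string.
const samplePost = [
  '---',
  'title: Update Links Of Blogs!',
  'author: Pallavi',
  'date: 01-09-2021',
  '---',
  'Post body goes here.',
].join('\n');

const lines = samplePost.split('\n');

// Collect the indices of the `---` delimiters that fence the frontmatter.
const metaDataIndices = lines.reduce((acc, elem, i) => {
  if (/^---/.test(elem)) acc.push(i);
  return acc;
}, []);

// Turn each `key: value` line between the delimiters into an object entry.
const metadata = {};
lines.slice(metaDataIndices[0] + 1, metaDataIndices[1]).forEach((line) => {
  metadata[line.split(': ')[0]] = line.split(': ')[1];
});

// Everything after the closing delimiter is the post content.
const content = lines.slice(metaDataIndices[1] + 1).join('\n');

// Same slug rule as convertTitleToSlug: lowercase, strip punctuation, hyphenate.
const slug = metadata.title
  .toLowerCase()
  .replace(/[^\w ]+/g, '')
  .replace(/ +/g, '-');

console.log(slug);    // update-slug derived from the title
console.log(content); // the post body after the frontmatter
```

Note that the `date` field is day-month-year, which is why `sortAccrodingtoDate` reindexes the split parts as `new Date(a[2], a[1] - 1, a[0])` rather than handing the raw string to `Date`.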
@@ -139,7 +139,7 @@ You can skip this step if using the default cStor Sparse pool.

![Disks detected by NDM, along with sparse disks](/images/blog/ndm-detected-disks.png)

-**Step 3c**: Create a storage pool claim using the instructions at [https://docs.openebs.io/docs/next/configurepools.html](https://docs.openebs.io/docs/next/configurepools.html)
+**Step 3c**: Create a storage pool claim using the instructions at [https://openebs.io/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools)

Create a `cstor-pool-config.yaml` as mentioned in the docs.
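For orientation only (this sketch is not part of the PR or the linked docs), an SPC-based `cstor-pool-config.yaml` generally takes the following shape; the pool name, pool type, and block device ID are placeholders:

```yaml
# Sketch of a StoragePoolClaim; all names and device IDs below are placeholders.
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped          # or mirrored, per the linked docs
  blockDevices:
    blockDeviceList:
      - blockdevice-<id>       # from `kubectl get blockdevice -n openebs`
```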

6 changes: 3 additions & 3 deletions website/src/blogs/deploying-openebs-on-suse-caas-platform.md
@@ -77,11 +77,11 @@ Note: — This step is not required if you are using the OpenEBS version 0.9 whi

Configuration of storage pool, storage class and PVC are like any other platform and the steps are outlined in [https://openebs.io/docs](/docs?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294)

-Pool Configuration — [https://docs.openebs.io/docs/next/configurepools.html#manual-mode](https://docs.openebs.io/docs/next/configurepools.html?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#manual-mode)
+Pool Configuration — [https://openebs.io/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#manual-mode)

-Storage class — [https://docs.openebs.io/docs/next/configuresc.html#creating-a-new-class](https://docs.openebs.io/docs/next/configuresc.html?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#creating-a-new-class)
+Storage class — [https://openebs.io/docs/deprecated/spc-based-cstor#creating-cStor-storage-class](/docs/deprecated/spc-based-cstor#creating-cStor-storage-class?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#creating-a-new-class)

-Volume — [https://docs.openebs.io/docs/next/provisionvols.html#provision-from-a-disk-pool](https://docs.openebs.io/docs/next/provisionvols.html?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#provision-from-a-disk-pool)
+Volume — [https://openebs.io/docs/deprecated/spc-based-cstor#provisioning-a-cStor-volume](/docs/deprecated/spc-based-cstor#provisioning-a-cStor-volume?__hstc=216392137.a6c0b8ba8416b65c52c0226c0e0b69fd.1579867391229.1579867391229.1579867391229.1&amp;__hssc=216392137.1.1579867391230&amp;__hsfp=3765904294#provision-from-a-disk-pool)

## Conclusion:

@@ -92,7 +92,7 @@ In addition to the benefits of using OpenEBS, there is also value in using MayaO

**Configure cStor Pool**

-1. If cStor Pool is not configured in your OpenEBS cluster, follow the steps presented [here](https://docs.openebs.io/docs/next/configurepools.html?__hstc=216392137.adc0011a00126e4785bfdeb5ec4f8c03.1580115966430.1580115966430.1580115966430.1&amp;__hssc=216392137.1.1580115966431&amp;__hsfp=818904025). As PostgreSQL is a StatefulSet application, it requires a single storage replication factor. If you prefer additional redundancy, you can always increase the replica count to 3.
+1. If cStor Pool is not configured in your OpenEBS cluster, follow the steps presented [here](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools?__hstc=216392137.adc0011a00126e4785bfdeb5ec4f8c03.1580115966430.1580115966430.1580115966430.1&amp;__hssc=216392137.1.1580115966431&amp;__hsfp=818904025). As PostgreSQL is a StatefulSet application, it requires a single storage replication factor. If you prefer additional redundancy, you can always increase the replica count to 3.
During cStor Pool creation, make sure that the maxPools parameter is set to >=3. If a cStor pool is already configured, move on to the next step. A sample YAML named openebs-config.yaml can be used for configuring cStor Pool and is provided in the Configuration details below.

**openebs-config.yaml**
@@ -127,7 +127,7 @@ During cStor Pool creation, make sure that the maxPools parameter is set to >=3.

**Create the Storage Class**

-1. You must configure a StorageClass to provision a cStor volume on a cStor pool. In this solution, we use a StorageClass to consume the cStor Pool. This is created using external disks attached on the Nodes. The storage pool is created using the steps provided in the [Configure StoragePool](https://docs.openebs.io/docs/next/configurepools.html?__hstc=216392137.adc0011a00126e4785bfdeb5ec4f8c03.1580115966430.1580115966430.1580115966430.1&amp;__hssc=216392137.1.1580115966431&amp;__hsfp=818904025) section. In this solution, PostgreSQL is a deployment. Because this requires replication at the storage level, the cStor volume replicaCount is 3. A sample YAML named openebs-sc-pg.yaml used to consume the cStor pool with a cStorVolume Replica count of 3 is provided in the configuration details below.
+1. You must configure a StorageClass to provision a cStor volume on a cStor pool. In this solution, we use a StorageClass to consume the cStor Pool. This is created using external disks attached on the Nodes. The storage pool is created using the steps provided in the [Configure StoragePool](/docs/deprecated/spc-based-cstor#creating-cStor-storage-pools?__hstc=216392137.adc0011a00126e4785bfdeb5ec4f8c03.1580115966430.1580115966430.1580115966430.1&amp;__hssc=216392137.1.1580115966431&amp;__hsfp=818904025) section. In this solution, PostgreSQL is a deployment. Because this requires replication at the storage level, the cStor volume replicaCount is 3. A sample YAML named openebs-sc-pg.yaml used to consume the cStor pool with a cStorVolume Replica count of 3 is provided in the configuration details below.

**openebs-sc-pg.yaml**
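As a hedged sketch only (the actual `openebs-sc-pg.yaml` lives in the blog's configuration details, which are collapsed in this diff), a StorageClass consuming a cStor pool with a replica count of 3 might look like this; the StoragePoolClaim name is an assumed placeholder:

```yaml
# Sketch only; "cstor-disk-pool" is a placeholder StoragePoolClaim name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-pg
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```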

2 changes: 1 addition & 1 deletion website/src/posts.json

Large diffs are not rendered by default.