Chia Stability Update #343

Merged Oct 18, 2022 (94 commits)

Commits
6fdbe70
Add REMOTE_LOCATION= variable
88plug Nov 24, 2021
0cb52d5
Fix up quotes
88plug Dec 8, 2021
be5537f
Bump version
88plug Dec 8, 2021
1ba63cd
Update to new version
88plug Dec 17, 2021
b8044a8
Bump version
88plug Dec 17, 2021
e276179
Merge branch 'master' into chia
88plug Jan 20, 2022
561c920
Add support for local plotting and download links
88plug Feb 4, 2022
c1efde6
Add file manager to REMOTE_LOCATION=local
88plug Feb 10, 2022
b6d667e
Add delete
88plug Feb 16, 2022
cb9af54
Add bladebit tested with persistent storage
88plug Feb 28, 2022
05efdc3
Bigger limits
88plug Mar 1, 2022
b1c32da
Thread count fix
88plug Mar 3, 2022
678d51d
Thread count
88plug Mar 3, 2022
aa28d05
Use default 10 threads if none specified
88plug Mar 3, 2022
b995987
Cleanup documentation
88plug Mar 3, 2022
6b2356e
Update log instructions
88plug Mar 3, 2022
05a08f3
Merge branch 'master' into chia
88plug Mar 3, 2022
085f238
Fix attribute usage
88plug Mar 3, 2022
361b577
-Fixes madmax error with TMPDIR not being set properly
88plug Mar 10, 2022
ea11fc1
Update Chia branch to latest master
88plug Mar 10, 2022
9434bc5
Remove imagePullPolicy from SDL
Mar 11, 2022
dfa699e
Add signedBy and version
Mar 16, 2022
82e169c
Update README
Mar 16, 2022
4761b4b
Max 7 plots
88plug Mar 30, 2022
afe343a
Fix ssh check on upload and limit max plots
88plug Mar 31, 2022
6ebe481
Fixes runtime error with python3.9
88plug Apr 9, 2022
ff17cc8
Upload background
88plug Apr 10, 2022
85052e4
Multi threaded uploading now enabled
88plug Apr 10, 2022
7de5e32
Upload background and new deployment size
88plug Apr 13, 2022
f4b2f1f
Merge branch 'master' into chia
88plug Apr 14, 2022
521b5c8
Fix up indentation
88plug Apr 14, 2022
59cc938
Fixes compile issue
88plug Apr 20, 2022
c7d5e51
rclone support
May 21, 2022
59bb5af
Background rclone move
May 21, 2022
c3c965c
Supports TOTAL_UPLOADS and TOTAL_PLOTS
May 22, 2022
2633f33
Support for JSON server
May 22, 2022
b4a3018
rclone tested
May 22, 2022
a732528
Logging fix
May 23, 2022
b4a8199
Add curl timeout
May 24, 2022
24f3983
Remove no destination check
May 27, 2022
050caa5
Cleanup formatting
May 27, 2022
f76b77a
Fix CHECK_PLOTS loop
May 29, 2022
d9ae35e
Add configs and benchmarks
Jun 6, 2022
7535240
Formatting
88plug Jun 6, 2022
9b92543
Add support for Summer Sale
Jun 16, 2022
6846b9c
Cleanup for version 200
Jun 16, 2022
79c7bde
Merge branch 'master' into chia
Jun 16, 2022
688e2fb
Fix small bug in review
Jun 16, 2022
9aa48df
Delete npm-debug.log
88plug Jun 16, 2022
0fb8f16
Merge branch 'master' into chia
Jun 17, 2022
b21ad02
Delete npm-debug.log
88plug Jun 16, 2022
caf9c5f
Merge branch 'master' into chia
Jul 3, 2022
6a9df79
Updated Alpha dirs
Jul 3, 2022
6be8478
Fix SSH upload
Jul 4, 2022
e186dd3
Revert "Fix SSH upload"
Jul 4, 2022
04a5c4f
Cleanup rclone commands
Jul 6, 2022
6baeeb6
Update curl timeouts for API
Jul 8, 2022
ca275f6
Better curl error handling
Jul 8, 2022
0acda5c
Cleanup api calls
Jul 11, 2022
f0c0f42
Make ramdrive folder
Jul 12, 2022
be2d605
Add support for public rclone shuffle directory and endpoints
Jul 19, 2022
866263c
Shuffle fixes
Jul 19, 2022
7b22423
Add support for Gdrive
Jul 27, 2022
ecddd91
TOTAL_PLOTS now works
Aug 1, 2022
d702f12
TOTAL_PLOTS fix
Aug 1, 2022
b34644d
Separate Chia plotters -
Aug 11, 2022
13ca867
Remove original Chia folder
Aug 11, 2022
318c23a
Merge remote-tracking branch 'origin/master' into chia
Aug 11, 2022
5de8eba
Updated pricing
Aug 11, 2022
29c7894
Minor fixes / README updates
Aug 12, 2022
99ac5c0
Use Ubuntu 22.04
Aug 12, 2022
91034de
Merge remote-tracking branch 'origin/master' into chia
Aug 12, 2022
ab36a8c
Bring image up to date
Aug 12, 2022
c9bdf01
Bump version to 262
Sep 12, 2022
6895efe
Merge branch 'master' into chia
Sep 12, 2022
5241f7a
Merge branch 'master' into chia
88plug Sep 23, 2022
894e5de
Update version to 0.16.0
88plug Sep 23, 2022
9baab5c
Bump version
88plug Sep 23, 2022
2dc890b
Support for Bladebit v2.0.0-beta1
88plug Sep 24, 2022
ce87f40
Support BUCKETS= across images
88plug Sep 24, 2022
395a4d1
Merge branch 'master' into chia
88plug Sep 24, 2022
78c1156
Final for Alfa
88plug Sep 25, 2022
16edfda
Alfa
88plug Sep 26, 2022
98e813b
Move to use binaries
88plug Sep 26, 2022
4570a0a
Check for DNS on start and fix bladebit run
88plug Sep 27, 2022
04ac694
Version 302
88plug Sep 27, 2022
fa0567f
Cleanup Dockerfile
88plug Sep 27, 2022
cbe8a7c
Warm memory
88plug Sep 27, 2022
43349f5
Merge remote-tracking branch 'origin/master' into chia
88plug Sep 27, 2022
13a714a
Cleanup curl commands
88plug Sep 28, 2022
5c79688
Remove fail from curl
88plug Sep 30, 2022
da50462
Remove warm start
88plug Sep 30, 2022
9af07bc
Perfect timeouts
88plug Oct 7, 2022
550e1d3
Finalize 316
88plug Oct 18, 2022
2 changes: 1 addition & 1 deletion chia-bladebit-disk/deploy.yaml
@@ -3,7 +3,7 @@ version: "2.0"

services:
chia:
image: cryptoandcoffee/akash-chia:303
image: cryptoandcoffee/akash-chia:316
expose:
- port: 8080
as: 80
8 changes: 4 additions & 4 deletions chia-bladebit-disk/run.sh
@@ -264,9 +264,9 @@ if [ ! -z $PLOTTER ]; then
elif [[ ${PLOTTER} == "madmax-ramdrive" ]]; then
madmax -k $PLOT_SIZE -n $COUNT -r $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -2 /mnt/ram/ -d $FINALDIR -u $BUCKETS $PORT
elif [[ ${PLOTTER} == "bladebit" ]]; then
bladebit -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
bladebit -w -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
elif [[ ${PLOTTER} == "bladebit-disk" ]]; then
bladebit-disk -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
bladebit-disk -w -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
else
madmax -k $PLOT_SIZE -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -d $FINALDIR -u $BUCKETS $PORT
fi
@@ -278,9 +278,9 @@ if [ ! -z $PLOTTER ]; then
elif [[ ${PLOTTER} == "madmax-ramdrive" ]]; then
madmax -k $PLOT_SIZE -n $COUNT -r $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -2 /mnt/ram/ -d $FINALDIR -u $BUCKETS $PORT
elif [[ ${PLOTTER} == "bladebit" ]]; then
bladebit -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
bladebit -w -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
elif [[ ${PLOTTER} == "bladebit-disk" ]]; then
bladebit-disk -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
bladebit-disk -w -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
else
madmax -k $PLOT_SIZE -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -d $FINALDIR -u $BUCKETS $PORT
fi
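The hunks above add bladebit's `-w` switch (warm-start: pre-fault memory pages before plotting) to both dispatch branches. A minimal sketch of the dispatch logic, with the plotter binaries replaced by echoed command lines so the branching can be checked without the real tools — `build_plot_cmd` and the variable values are illustrative, not part of the script:

```shell
#!/usr/bin/env bash
# Sketch of run.sh's plotter dispatch. Instead of executing madmax/bladebit,
# each branch echoes the command it would run; -w is the warm-start flag this
# PR adds to the two bladebit branches.
build_plot_cmd() {
  case $1 in
    madmax-ramdrive)
      echo "madmax -k $PLOT_SIZE -n $COUNT -r $CPU_UNITS -t $TMPDIR -2 /mnt/ram/ -d $FINALDIR" ;;
    bladebit)
      echo "bladebit -w -n $COUNT -t $CPU_UNITS -f $FARMERKEY $FINALDIR" ;;
    bladebit-disk)
      echo "bladebit-disk -w -t $CPU_UNITS -f $FARMERKEY diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR" ;;
    *)  # default branch: madmax plotting to disk
      echo "madmax -k $PLOT_SIZE -n $COUNT -t $CPU_UNITS -t $TMPDIR -d $FINALDIR" ;;
  esac
}
```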
18 changes: 9 additions & 9 deletions chia-bladebit-disk/sync_rclone.sh
@@ -7,7 +7,7 @@ rm /plots/failed.log
#Run once to test upload

for (( ; ; )); do
curl --retry-all-errors $JSON_SERVER > api_plots.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 $JSON_SERVER > api_plots.log
files=$(ls -la /plots/*.plot | awk '{print $9}')
count=$(ls -la /plots/*.plot | wc -l)

@@ -38,7 +38,7 @@ for (( ; ; )); do
#nohup rclone --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 -P --transfers=1 --fast-list --tpslimit=1 --bwlimit 100000000000000000000000 --dropbox-chunk-size=150M move $i $ENDPOINT_LOCATION:$ENDPOINT_DIR >>$i.log 2>&1 &
echo $i >>/plots/pending.log
START_TIME=$(date +%s)
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
elif [[ $SHUFFLE_RCLONE_ENDPOINT == true ]]; then
#Uses same directory name
ENDPOINT_LOCATION=$(cat /root/.config/rclone/rclone.conf | grep "\[" | sort | uniq | shuf | tail -n1 | sed 's/[][]//g')
@@ -63,18 +63,17 @@ for (( ; ; )); do
nohup rclone --retries 99 --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 --dropbox-chunk-size 150M --drive-chunk-size 256M --progress move $i $ENDPOINT_LOCATION:/$ENDPOINT_DIR >>$i.log 2>&1 &
START_TIME=$(date +%s)
if [[ $JSON_SERVER != "" ]]; then
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl --connect-timeout 2 --retry 99 --retry-delay 2 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
fi
fi

fi

done

sleep 15

if [[ $ALPHA == true ]]; then
for i in $pending; do
sleep 5
FINISHED=100
progress=$(cat $i.log | grep -o -P '(?<=GiB, ).*(?=%,)' | tail -n1)
speed=$(cat $i.log | grep -o -P '(?<=%, ).*(?= ETA)' | tail -n1 | sed 's/.$//')
@@ -93,23 +92,24 @@

if [[ $result != "upload_complete" && $progress == "100" ]]; then
END=$(date +%s)
curl --retry-all-errors -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
rm $i.log
sed -i "s|$i||g" /plots/pending.log
sed -i '/^$/d;s/[[:blank:]]//g' /plots/pending.log
fi

else
curl --retry-all-errors -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
fi

else
echo "No progress or id"
result=$(cat api_plots.log | jq -r '.[] | select(.filename == "'"$i"'").progress')
if [[ $result != "Possible error detected in logs" ]]; then
echo "Updating the API with the error"
error=$(cat $i.log | grep ERROR | head -n1)
curl --retry-all-errors -d "error=$error" -X PATCH $JSON_SERVER/$id
#error=$(cat $i.log | grep ERROR | head -n1)
error="ERROR"
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "error=$error" -X PATCH $JSON_SERVER/$id
fi
fi
done
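The recurring change in this file swaps bare `curl --retry-all-errors` calls for ones with an explicit connect timeout and retry pacing. A hypothetical wrapper capturing the hardened invocation — the `api_get` name is not in the script, which inlines the flags at each call site:

```shell
# Hardened curl pattern used throughout sync_rclone.sh: give up on connecting
# after 5 s, then retry any error up to 99 times with a 5 s delay between tries.
api_get() {
  curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 "$1"
}
```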
2 changes: 1 addition & 1 deletion chia-bladebit/deploy.yaml
@@ -3,7 +3,7 @@ version: "2.0"

services:
chia:
image: cryptoandcoffee/akash-chia:303
image: cryptoandcoffee/akash-chia:316
expose:
- port: 8080
as: 80
8 changes: 4 additions & 4 deletions chia-bladebit/run.sh
@@ -11,7 +11,7 @@ fi

if [[ $RCLONE == "true" && $JSON_SERVER != "" ]]; then

CHECK_PLOTS=$(curl --retry-all-errors --head -s "$JSON_SERVER?_page=1&_limit=1" | grep X-Total-Count | awk '{print $2}' | head -n1)
CHECK_PLOTS=$(curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 --head "$JSON_SERVER?_page=1&_limit=1" | grep X-Total-Count | awk '{print $2}' | head -n1)
CHECK_PLOTS="${CHECK_PLOTS%$'\r'}"
if (( $(bc <<<"$CHECK_PLOTS >= $TOTAL_PLOTS") )); then
echo "KILL"
@@ -100,8 +100,8 @@ else
echo "###################################################################################################"
echo "This deployment can create a total of $STORAGE_PLOTS plots on ${STORAGE_UNITS}Gi available storage "
echo "requested without stopping. If this number doesn\'t look right - you need to update the CPU_UNITS, "
echo "MEMORY_UNITS, STORAGE_UNITS to match the units requested in the SDL. Sleeping 30 seconds. "
sleep 30
echo "MEMORY_UNITS, STORAGE_UNITS to match the units requested in the SDL. Sleeping 5 seconds. "
sleep 5
fi

if [[ "$FINAL_LOCATION" == "local" ]]; then
@@ -235,7 +235,7 @@ if [ ! -z $PLOTTER ]; then
fi

if [[ $JSON_SERVER != "" && $TOTAL_PLOTS != "" ]]; then #Count plots with server
CHECK_PLOTS=$(curl --retry-all-errors --head -s "$JSON_SERVER?_page=1&_limit=1" | grep X-Total-Count | awk '{print $2}' | head -n1)
CHECK_PLOTS=$(curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 --head "$JSON_SERVER?_page=1&_limit=1" | grep X-Total-Count | awk '{print $2}' | head -n1)
CHECK_PLOTS=${CHECK_PLOTS%$'\r'}
if (( $(bc <<<"$CHECK_PLOTS >= $TOTAL_PLOTS") )); then
echo "KILL"
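run.sh stops plotting once the JSON server reports `TOTAL_PLOTS` uploads; the count comes from the `X-Total-Count` header of a HEAD request. A small sketch of just the header parsing, fed raw header text on stdin instead of a live request — `count_from_headers` is an illustrative name, not in the script:

```shell
# Extract the X-Total-Count value from raw HTTP response headers on stdin,
# stripping the trailing carriage return HTTP header lines carry (the same
# ${VAR%$'\r'} trim run.sh applies to CHECK_PLOTS).
count_from_headers() {
  local n
  n=$(grep -i 'X-Total-Count' | awk '{print $2}' | head -n1)
  printf '%s' "${n%$'\r'}"
}
```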
18 changes: 9 additions & 9 deletions chia-bladebit/sync_rclone.sh
@@ -7,7 +7,7 @@ rm /plots/failed.log
#Run once to test upload

for (( ; ; )); do
curl --retry-all-errors $JSON_SERVER > api_plots.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 $JSON_SERVER > api_plots.log
files=$(ls -la /plots/*.plot | awk '{print $9}')
count=$(ls -la /plots/*.plot | wc -l)

@@ -38,7 +38,7 @@ for (( ; ; )); do
#nohup rclone --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 -P --transfers=1 --fast-list --tpslimit=1 --bwlimit 100000000000000000000000 --dropbox-chunk-size=150M move $i $ENDPOINT_LOCATION:$ENDPOINT_DIR >>$i.log 2>&1 &
echo $i >>/plots/pending.log
START_TIME=$(date +%s)
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
elif [[ $SHUFFLE_RCLONE_ENDPOINT == true ]]; then
#Uses same directory name
ENDPOINT_LOCATION=$(cat /root/.config/rclone/rclone.conf | grep "\[" | sort | uniq | shuf | tail -n1 | sed 's/[][]//g')
@@ -63,18 +63,17 @@ for (( ; ; )); do
nohup rclone --retries 99 --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 --dropbox-chunk-size 150M --drive-chunk-size 256M --progress move $i $ENDPOINT_LOCATION:/$ENDPOINT_DIR >>$i.log 2>&1 &
START_TIME=$(date +%s)
if [[ $JSON_SERVER != "" ]]; then
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl --connect-timeout 2 --retry 99 --retry-delay 2 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
fi
fi

fi

done

sleep 15

if [[ $ALPHA == true ]]; then
for i in $pending; do
sleep 5
FINISHED=100
progress=$(cat $i.log | grep -o -P '(?<=GiB, ).*(?=%,)' | tail -n1)
speed=$(cat $i.log | grep -o -P '(?<=%, ).*(?= ETA)' | tail -n1 | sed 's/.$//')
@@ -93,23 +92,24 @@

if [[ $result != "upload_complete" && $progress == "100" ]]; then
END=$(date +%s)
curl --retry-all-errors -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
rm $i.log
sed -i "s|$i||g" /plots/pending.log
sed -i '/^$/d;s/[[:blank:]]//g' /plots/pending.log
fi

else
curl --retry-all-errors -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
fi

else
echo "No progress or id"
result=$(cat api_plots.log | jq -r '.[] | select(.filename == "'"$i"'").progress')
if [[ $result != "Possible error detected in logs" ]]; then
echo "Updating the API with the error"
error=$(cat $i.log | grep ERROR | head -n1)
curl --retry-all-errors -d "error=$error" -X PATCH $JSON_SERVER/$id
#error=$(cat $i.log | grep ERROR | head -n1)
error="ERROR"
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "error=$error" -X PATCH $JSON_SERVER/$id
fi
fi
done
2 changes: 1 addition & 1 deletion chia-madmax/deploy.yaml
@@ -3,7 +3,7 @@ version: "2.0"

services:
chia:
image: cryptoandcoffee/akash-chia:303
image: cryptoandcoffee/akash-chia:316
expose:
- port: 8080
as: 80
8 changes: 4 additions & 4 deletions chia-madmax/run.sh
@@ -264,9 +264,9 @@ if [ ! -z $PLOTTER ]; then
elif [[ ${PLOTTER} == "madmax-ramdrive" ]]; then
madmax -k $PLOT_SIZE -n $COUNT -r $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -2 /mnt/ram/ -d $FINALDIR -u $BUCKETS $PORT
elif [[ ${PLOTTER} == "bladebit" ]]; then
bladebit -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
bladebit -w -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
elif [[ ${PLOTTER} == "bladebit-disk" ]]; then
bladebit-disk -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
bladebit-disk -w -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
else
madmax -k $PLOT_SIZE -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -d $FINALDIR -u $BUCKETS $PORT
fi
@@ -278,9 +278,9 @@ if [ ! -z $PLOTTER ]; then
elif [[ ${PLOTTER} == "madmax-ramdrive" ]]; then
madmax -k $PLOT_SIZE -n $COUNT -r $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -2 /mnt/ram/ -d $FINALDIR -u $BUCKETS $PORT
elif [[ ${PLOTTER} == "bladebit" ]]; then
bladebit -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
bladebit -w -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY $FINALDIR
elif [[ ${PLOTTER} == "bladebit-disk" ]]; then
bladebit-disk -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
bladebit-disk -w -t $CPU_UNITS -f $FARMERKEY -c $CONTRACT diskplot -b $BUCKETS -t1 $TMPDIR --cache $RAMCACHE -a $FINALDIR
else
madmax -k $PLOT_SIZE -n $COUNT -t $CPU_UNITS -c $CONTRACT -f $FARMERKEY -t $TMPDIR -d $FINALDIR -u $BUCKETS $PORT
fi
18 changes: 9 additions & 9 deletions chia-madmax/sync_rclone.sh
@@ -7,7 +7,7 @@ rm /plots/failed.log
#Run once to test upload

for (( ; ; )); do
curl --retry-all-errors $JSON_SERVER > api_plots.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 $JSON_SERVER > api_plots.log
files=$(ls -la /plots/*.plot | awk '{print $9}')
count=$(ls -la /plots/*.plot | wc -l)

@@ -38,7 +38,7 @@ for (( ; ; )); do
#nohup rclone --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 -P --transfers=1 --fast-list --tpslimit=1 --bwlimit 100000000000000000000000 --dropbox-chunk-size=150M move $i $ENDPOINT_LOCATION:$ENDPOINT_DIR >>$i.log 2>&1 &
echo $i >>/plots/pending.log
START_TIME=$(date +%s)
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl -s --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
elif [[ $SHUFFLE_RCLONE_ENDPOINT == true ]]; then
#Uses same directory name
ENDPOINT_LOCATION=$(cat /root/.config/rclone/rclone.conf | grep "\[" | sort | uniq | shuf | tail -n1 | sed 's/[][]//g')
@@ -63,18 +63,17 @@ for (( ; ; )); do
nohup rclone --retries 99 --contimeout 60s --timeout 300s --low-level-retries 10 --retries 99 --dropbox-chunk-size 150M --drive-chunk-size 256M --progress move $i $ENDPOINT_LOCATION:/$ENDPOINT_DIR >>$i.log 2>&1 &
START_TIME=$(date +%s)
if [[ $JSON_SERVER != "" ]]; then
curl --retry-all-errors -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
curl --connect-timeout 2 --retry 99 --retry-delay 2 -d "filename=$i" -d "endpoint_location=$ENDPOINT_LOCATION" -d "endpoint_directory=$ENDPOINT_DIR" -d "start_time=$START_TIME" -d "provider=$AKASH_CLUSTER_PUBLIC_HOSTNAME" -X POST $JSON_SERVER >>$i.log
fi
fi

fi

done

sleep 15

if [[ $ALPHA == true ]]; then
for i in $pending; do
sleep 5
FINISHED=100
progress=$(cat $i.log | grep -o -P '(?<=GiB, ).*(?=%,)' | tail -n1)
speed=$(cat $i.log | grep -o -P '(?<=%, ).*(?= ETA)' | tail -n1 | sed 's/.$//')
@@ -93,23 +92,24 @@

if [[ $result != "upload_complete" && $progress == "100" ]]; then
END=$(date +%s)
curl --retry-all-errors -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "progress=upload_complete" -d "finish_time=$END" -X PATCH $JSON_SERVER/$id
rm $i.log
sed -i "s|$i||g" /plots/pending.log
sed -i '/^$/d;s/[[:blank:]]//g' /plots/pending.log
fi

else
curl --retry-all-errors -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "total_time=$TOTAL_TIME" -d "progress=$progress" -d "speed=$speed" -X PATCH $JSON_SERVER/$id
fi

else
echo "No progress or id"
result=$(cat api_plots.log | jq -r '.[] | select(.filename == "'"$i"'").progress')
if [[ $result != "Possible error detected in logs" ]]; then
echo "Updating the API with the error"
error=$(cat $i.log | grep ERROR | head -n1)
curl --retry-all-errors -d "error=$error" -X PATCH $JSON_SERVER/$id
#error=$(cat $i.log | grep ERROR | head -n1)
error="ERROR"
curl --connect-timeout 5 --retry 99 --retry-all-errors --retry-delay 5 -d "error=$error" -X PATCH $JSON_SERVER/$id
fi
fi
done
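The ALPHA reporting path scrapes percent-complete and transfer speed out of each upload's rclone log with PCRE lookarounds. A sketch of those two extractions against an illustrative `rclone --progress` log line — the function names are not in the script, which inlines the pipelines:

```shell
# Pull percent-complete and speed out of rclone --progress output, using the
# same lookaround patterns as sync_rclone.sh; tail -n1 keeps the latest update,
# and the trailing sed drops the comma left after the speed field.
parse_progress() { grep -o -P '(?<=GiB, ).*(?=%,)' | tail -n1; }
parse_speed()    { grep -o -P '(?<=%, ).*(?= ETA)' | tail -n1 | sed 's/.$//'; }
```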