Hi all,
I am working with four libraries of 76 bp reads, with insert sizes of 300 bp, 1 kb, 8 kb and 12 kb; the expected genome size is 80 Mb. I am assembling these four libraries with Velvet. I first ran Velvet across a range of k-mers, and then, for the best k-mer, tried a range of cov_cutoff values. For all of these assemblies at the same k-mer (69) with different cov_cutoffs, I reused the same Roadmaps and Sequences files from the initial velveth run (k-mer 69, default cov_cutoff).
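(For reference, the pattern is roughly the one sketched below; directory and file names are placeholders, only two of the four libraries are shown, and running k = 69 assumes Velvet was compiled with a large enough MAXKMERLENGTH.)

    # hash the reads once at k = 69; this writes the Roadmaps and Sequences files
    velveth asm_k69 69 -fastq -shortPaired lib_300bp.fq -shortPaired2 lib_1kb.fq
    # build the graph from the existing velveth output with a given cov_cutoff
    velvetg asm_k69 -cov_cutoff 12 -exp_cov auto -ins_length 300 -ins_length2 1000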
Surprisingly, with a cov_cutoff of 12 (median coverage 30.76 in the Log file) on those previously generated Roadmaps and Sequences files, I got an N50 of 10 Mb and a largest scaffold of 23 Mb. Later I ran a completely fresh assembly from the same reads, again with k-mer 69 and cov_cutoff 12, and now my N50 is 2 Mb and the largest scaffold is 6.78 Mb. I then repeated the run with the same input files and the same Velvet parameters, and found that velveth generates a different Roadmaps file for the same k-mer (69) in each of the three runs. What could be the reason for this? As it stands, the results cannot be reproduced.
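(In case it helps, the mismatch between runs can be confirmed with a plain checksum of the velveth output; the directory names below are placeholders.)

    md5sum run1_k69/Roadmaps run2_k69/Roadmaps run3_k69/Roadmaps
    md5sum run1_k69/Sequences run2_k69/Sequences run3_k69/Sequences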
I would really appreciate your comments on this.
Best regards,
Rahul
Thanks for this, but Velvet takes ages to generate assemblies on a single processor. As for VelvetOptimiser, I cannot use it in my analysis: my dataset is huge and a single run consumes ~95% of the RAM of our whole group's machine, so I need to schedule jobs accordingly. In this case, optimising Velvet manually (sketched below) is the right choice.
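(A minimal sketch of the manual route I mean, assuming one parameter value per job so that each run's memory footprint stays within a single allocation; directory names are placeholders and the cov_cutoff values are just examples.)

    # one velvetg job per cov_cutoff value, all reusing the same velveth output (k = 69)
    for c in 4 8 12 16; do
        cp -r asm_k69 asm_k69_cc${c}    # velvetg overwrites files in place, so work on a copy
        velvetg asm_k69_cc${c} -cov_cutoff ${c} -exp_cov auto > velvetg_cc${c}.log
    done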
Hi SES,
I am currently trying to set up my submission script for VelvetOptimiser on our cluster and am a bit confused about how the various parameters should be set. The plan was to spread the instances over 24 threads, so I initially used the following parameters:
    OMP_NUM_THREADS=24
    OPENBLAS_NUM_THREADS=24
    --cpus=24
    --cpus-per-task=24
    mem=256Gb
    -t 24
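In script form, that first attempt amounted to something like the sketch below (assuming SLURM, given the --cpus-per-task flag; the hash range and read file arguments are placeholders):

    #!/bin/bash
    #SBATCH --cpus-per-task=24
    #SBATCH --mem=256G
    export OMP_NUM_THREADS=24
    export OPENBLAS_NUM_THREADS=24
    VelvetOptimiser.pl -s 61 -e 75 -t 24 -f '-fastq -shortPaired reads.fq'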
However, this appears to have caused thread allocation issues (perhaps it tried to allocate 24*24 threads?), and the program eventually crashed with the error messages below (I am not sure the thread allocation actually caused the crash, though; if you don't think so, I would appreciate any recommendation :-) ):
Anyway, I then tried to correct this by setting the thread parameters as follows, but things were much slower, with Velvet calculating only 3 hash values at a time:
    OMP_NUM_THREADS=8
    OPENBLAS_NUM_THREADS=8
    --cpus=24
    --cpus-per-task=24
    mem=256Gb
    -t 3
So my question is: how exactly should the thread parameters be set to benefit from OpenMP, and how can Velvet/Oases be parallelised optimally?
Thank you!
It is not a good idea to post a new question in the comment section. On Biostar we like to keep a single topic per thread. In addition, very few people will see your post here, so it is also inefficient.