Removed standard profile info
drpatelh committed Dec 14, 2018
1 parent 683da1d commit 30f7f1d
Showing 3 changed files with 8 additions and 23 deletions.
23 changes: 4 additions & 19 deletions docs/installation.md
@@ -12,8 +12,6 @@ To start using the nf-core/atacseq pipeline, follow the steps below:
* [Software deps: Bioconda](#32-software-deps-bioconda)
* [Configuration profiles](#33-configuration-profiles)
4. [Reference genomes](#4-reference-genomes)
5. [Appendices](#5-appendices)
* [Running on UPPMAX](#running-on-uppmax)

## 1) Install NextFlow
Nextflow runs on most POSIX systems (Linux, Mac OSX etc). It can be installed by running the following commands:
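The commands themselves sit in an elided part of this hunk; as a rough sketch (assuming Java 8+ is already installed and `~/bin` is on your `PATH`), the standard Nextflow install looks like this:

```bash
# Check that Java v8 or later is available (required by Nextflow)
java -version

# Download the Nextflow launcher into the current directory
curl -s https://get.nextflow.io | bash

# Move the launcher somewhere on your PATH
mv nextflow ~/bin/
```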
@@ -60,7 +58,7 @@ export NXF_OFFLINE='TRUE'
If you would like to make changes to the pipeline, it's best to make a fork on GitHub and then clone the files. Once cloned you can run the pipeline directly as above.
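As a sketch of that fork-and-clone workflow (`YOUR_USERNAME` is a placeholder for your GitHub account):

```bash
# Clone your fork of the pipeline (fork nf-core/atacseq on GitHub first)
git clone https://github.com/YOUR_USERNAME/atacseq.git
cd atacseq

# Run the local copy of the pipeline script directly
nextflow run main.nf --help
```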

## 3) Pipeline configuration
By default, the pipeline runs with the `standard` configuration profile. This uses a number of sensible defaults for process requirements and is suitable for running on a simple (if powerful!) basic server. You can see this configuration in [`conf/base.config`](../conf/base.config).
By default, the pipeline loads a basic server configuration [`conf/base.config`](../conf/base.config). This uses a number of sensible defaults for process requirements and is suitable for running on a simple (if powerful!) basic server.

Be warned of two important points about this default configuration:

@@ -72,11 +70,11 @@ Be warned of two important points about this default configuration:
#### 3.1) Software deps: Docker
First, install docker on your system: [Docker Installation Instructions](https://docs.docker.com/engine/installation/)

Then, running the pipeline with the option `-profile standard,docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from dockerhub (https://hub.docker.com/r/nfcore/atacseq).
Then, running the pipeline with the option `-profile docker` tells Nextflow to enable Docker for this run. An image containing all of the software requirements will be automatically fetched and used from dockerhub (https://hub.docker.com/r/nfcore/atacseq).
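After this change, a Docker-based run therefore matches the usage example further down in this commit (`design.csv` and `GRCh37` are just example inputs):

```bash
# Enable the docker profile; the nfcore/atacseq image is pulled automatically
nextflow run nf-core/atacseq \
    --design design.csv \
    --genome GRCh37 \
    -profile docker
```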

#### 3.1) Software deps: Singularity
If you're not able to use Docker then [Singularity](http://singularity.lbl.gov/) is a great alternative.
The process is very similar: running the pipeline with the option `-profile standard,singularity` tells Nextflow to enable singularity for this run. An image containing all of the software requirements will be automatically fetched and used from singularity hub.
The process is very similar: running the pipeline with the option `-profile singularity` tells Nextflow to enable singularity for this run. An image containing all of the software requirements will be automatically fetched and used from singularity hub.

If running offline with Singularity, you'll need to download and transfer the Singularity image first:

@@ -96,7 +94,7 @@ Remember to pull updated versions of the singularity image if you update the pipeline
If you're not able to use Docker _or_ Singularity, you can instead use conda to manage the software requirements.
This is slower and less reproducible than the above, but is still better than having to install all requirements yourself!
The pipeline ships with a conda environment file and nextflow has built-in support for this.
To use it first ensure that you have conda installed (we recommend [miniconda](https://conda.io/miniconda.html)), then follow the same pattern as above and use the flag `-profile standard,conda`
To use it first ensure that you have conda installed (we recommend [miniconda](https://conda.io/miniconda.html)), then follow the same pattern as above and use the flag `-profile conda`
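A conda-based run then follows the same pattern; a minimal sketch with the same example inputs as above:

```bash
# Requires a conda installation (e.g. miniconda) on the PATH;
# Nextflow builds the environment from the pipeline's conda environment file
nextflow run nf-core/atacseq \
    --design design.csv \
    --genome GRCh37 \
    -profile conda
```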

#### 3.3) Configuration profiles

@@ -105,16 +103,3 @@ See [`docs/configuration/adding_your_own.md`](configuration/adding_your_own.md)
## 4) Reference genomes

See [`docs/configuration/reference_genomes.md`](configuration/reference_genomes.md)

## 5) Appendices

#### Running on UPPMAX
To run the pipeline on the [Swedish UPPMAX](https://www.uppmax.uu.se/) clusters (`rackham`, `irma`, `bianca` etc), use the command line flag `-profile uppmax`. This tells Nextflow to submit jobs using the SLURM job executor with Singularity for software dependencies.

Note that you will need to specify your UPPMAX project ID when running a pipeline. To do this, use the command line flag `--project <project_ID>`. The pipeline will exit with an error message if you try to run the pipeline with the default UPPMAX config profile without a project.
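Before this removal, a full UPPMAX launch would therefore have looked roughly like this (a sketch; `b2017123` is the example project ID from the config snippet below, and the other inputs are placeholders):

```bash
# Submit via SLURM with Singularity, charging the example project b2017123
nextflow run nf-core/atacseq \
    --design design.csv \
    --genome GRCh37 \
    -profile uppmax \
    --project b2017123
```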

**Optional Extra:** To avoid having to specify your project every time you run Nextflow, you can add it to your personal Nextflow config file instead. Add this line to `~/.nextflow/config`:

```nextflow
params.project = 'project_ID' // eg. b2017123
```
4 changes: 2 additions & 2 deletions docs/usage.md
@@ -71,7 +71,7 @@ NXF_OPTS='-Xms1g -Xmx4g'
## Running the pipeline
The typical command for running the pipeline is as follows:
```bash
nextflow run nf-core/atacseq --design design.csv --genome GRCh37 -profile standard,docker
nextflow run nf-core/atacseq --design design.csv --genome GRCh37 -profile docker
```

This will launch the pipeline with the `docker` configuration profile. See below for more information about profiles.
@@ -102,7 +102,7 @@ This version number will be logged in reports when you run the pipeline, so that
## Main arguments

### `-profile`
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile standard,docker` - the order of arguments is important!
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile docker` - the order of arguments is important!
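As a sketch of the multi-profile form, the bundled `test` profile can be combined with Docker like this (the order noted above matters when profiles set overlapping options):

```bash
# Use the test profile's bundled inputs with Docker for software dependencies
nextflow run nf-core/atacseq -profile test,docker
```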

If `-profile` is not specified at all the pipeline will be run locally and expects all software to be installed and available on the `PATH`.

4 changes: 2 additions & 2 deletions main.nf
@@ -26,14 +26,14 @@ def helpMessage() {
The typical command for running the pipeline is as follows:
nextflow run nf-core/atacseq --design design.csv --genome GRCh37 -profile standard,docker
nextflow run nf-core/atacseq --design design.csv --genome GRCh37 -profile docker
Mandatory arguments:
--design Comma-separated file containing information about the samples in the experiment (see docs/usage.md)
--fasta Path to Fasta reference. Not mandatory when using reference in iGenomes config via --genome
--gtf Path to GTF file in Ensembl format. Not mandatory when using reference in iGenomes config via --genome
-profile Configuration profile to use. Can use multiple (comma separated)
Available: standard, conda, docker, singularity, awsbatch, test
Available: conda, docker, singularity, awsbatch, test
Generic
--genome Name of iGenomes reference
