Post Processing
A reusable Nextflow component for running post-processing tasks on data generated by upstream pipeline steps. It can enhance, transform, or modify outputs, making it versatile for addressing step-specific requirements or handling errors.
This component is highly configurable, supporting fine-tuned control of computational resources (CPU, memory), containerization, and output management. Users can integrate custom containers (e.g., Python, C#) and scripts to implement their own logic for post-processing, all configured through parameters. Externalized process scripts allow for seamless execution of containerized processes.
Key Features:
Flexibility: Can be applied to a wide range of pipeline steps and data types.
Customizability: Easily adaptable to different post-processing requirements.
Reusability: Can be used in multiple pipelines, reducing development effort.
Error handling: Can be used to address issues or errors in the pipeline.
Data transformation: Can be used to transform or modify output data in various ways.
main.nf
Directives
label: Specifies the computational resources (e.g., CPUs, memory) required by the process. Configured via "cpusMemoryConfig" in configuration.json.
container: Specifies the Docker container image used by the process. Configured via "container" in configuration.json.
tag: Tags the process for easier identification in logs. Configured via "tag" in configuration.json.
when: Conditionally executes the process. Configured via "enabled" in configuration.json; if set to true, the process runs, otherwise it is skipped.
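A minimal sketch of how these directives might appear in main.nf, assuming the step's configuration has been loaded into params.self (the literal values shown are illustrative):

    process PostProcessing {
        label 'mediumCpuMemory'            // resource profile, driven by "cpusMemoryConfig"
        container params.self.container    // Docker image from "container"
        tag "${params.self.tag}"           // identifier shown in the execution log

        when:
        params.self.enabled                // run only when "enabled" is true

        // input, output, and script sections follow
    }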
Input
inputs: A channel that passes the parent step output directory and a collection of additional inputs to the process, giving the flexibility to pass multiple inputs.
Output
path("${params.self.stepName}/**"): Output path of step name defined in ${params.self.stepName}.
Parent step output directory
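As a sketch, the input and output declarations might look like this (variable names are illustrative):

    input:
    tuple val(sampleId), path(parentDir)   // parent step output directory
    val variableInputs                     // collection of additional inputs

    output:
    path "${params.self.stepName}/**"      // all files under the step directory
    tuple val(sampleId), path(parentDir)   // parent directory passed back through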
Publishing
publishDir: Specifies where output files should be saved:
${params.parent.logsIntermediates}: Directory for intermediate files and logs.
${params.parent.results}: Directory for final results.
mode: 'copy': Specifies that the files should be copied to the target directory.
pattern: File pattern to match for publishing (e.g., *.tsv).
enabled: Conditional flag (params.self.publishToResults) to control whether the files should be published.
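Together, the two publishing targets might be declared as in this sketch:

    publishDir "${params.parent.logsIntermediates}", mode: 'copy'
    publishDir "${params.parent.results}", mode: 'copy',
        pattern: "${params.self.resultsPublishPattern}",   // e.g. '*.tsv'
        enabled: params.self.publishToResults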
Script Execution
GroovyShell: A GroovyShell instance is created to evaluate a Groovy script specified by ${params.self.groovyScript}. The script is passed the inputs as a binding, allowing it to dynamically process the input data.
Ref: postscripts/config.groovy
template: The shell script template specified by ${params.self.shellScript} is executed with the processed configuration. It allows the user to add post-processing logic, which runs in the Docker container specified by "container" in configuration.json.
Ref: postscripts/script.sh
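Put together, the script section might look like this sketch (the exact wiring inside the component may differ):

    script:
    // evaluate the user-supplied Groovy script with the process inputs bound
    def gshell = new GroovyShell(new Binding([inputs: inputs]))
    gshell.evaluate(new File("${params.self.groovyScript}"))

    // render and execute the user-supplied shell script template
    template "${params.self.shellScript}"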
In the referenced example, BAM output is automatically converted to CRAM format to save disk space, using the reference genome specified in the analysis.
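A sketch of what such a script.sh might contain; the samtools commands are standard, but the bam, ref, and cram placeholders are assumed to have been bound by config.groovy (in a Nextflow template they are resolved from the Groovy bindings, not by bash):

    #!/usr/bin/env bash
    set -euo pipefail

    # convert BAM to CRAM against the analysis reference, then remove the BAM
    samtools view -C -T $ref -o $cram $bam
    samtools index $cram
    rm $bam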
Usage
To use the PostProcessing process in your Nextflow workflow:
Configuration: Ensure that the necessary parameters (e.g., container, CPU/memory settings, and script paths) are defined in your nextflow.config or passed as command-line parameters.
Execution:
Create a PostProcessing module for your process, e.g. https://git.illumina.com/ClinicalGenomics/clinical-pipelines/tree/main/modules/dragen_analysis_post.
Create a Groovy script, shell script and container image to fit the specific needs of your pipeline.
Creating config.groovy
Binding Input Variables: Ensure that all necessary inputs are correctly bound to variables within the Groovy script. These variables should be accessible and easily referenced within script.sh.
Process Return Structure:
Step name directory
Input tuple: a tuple of sampleId and the parent step directory path.
The process modifies one or more files in the parent work directory and returns the parent step directory.
The tuple to return is set in the Groovy script, as in the sketch below.
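A minimal config.groovy sketch, assuming the process binds its inputs into the script's binding; all variable names here are illustrative:

    // config.groovy - evaluated via GroovyShell with the process inputs bound
    def (sampleId, parentDir) = inputs[0]     // tuple: sampleId and parent step directory
    def variableInputs        = inputs[1]     // array of additional inputs

    // expose values so script.sh can reference them
    binding.setVariable('bam', "${parentDir}/${sampleId}.bam")
    binding.setVariable('cram', "${parentDir}/${sampleId}.cram")
    binding.setVariable('ref', variableInputs[0])

    // tuple returned by the process: sampleId plus the (modified) parent directory
    return [sampleId, parentDir]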
Creating shell.sh
Invoke the container's main process with the specified arguments.
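A generic sketch; my_tool stands in for the container's main executable, and the bound variable names are placeholders for whatever config.groovy exposed:

    #!/usr/bin/env bash
    set -euo pipefail

    # $input_dir and $ref were bound in config.groovy
    my_tool --input $input_dir --reference $ref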
Configuration Parameters:
Ensure the params.pipelineConfig.{PARENT}PostProcessing section in your configuration file is properly set up with these parameters.
container: The process utilizes a Docker container to ensure consistent execution across environments.
shellScript: A shell script that invokes the container's main process.
groovyScript: A Groovy script that binds the process inputs so they are accessible in the shell script.
publishToResults: Whether to copy output files to Results.
resultsPublishPattern: File pattern for files copied to Results (e.g., *.tsv).
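A configuration.json sketch for a hypothetical dragenAnalysisPostProcessing step; all values are placeholders:

    "dragenAnalysisPostProcessing": {
        "enabled": true,
        "tag": "dragen-analysis-post",
        "cpusMemoryConfig": "mediumCpuMemory",
        "container": "my-registry/post-tools:1.0",
        "shellScript": "postscripts/script.sh",
        "groovyScript": "postscripts/config.groovy",
        "publishToResults": true,
        "resultsPublishPattern": "*.cram"
    }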
Including the Process:
Use the include statement to import the PostProcessing process into your workflow under a custom name. This allows you to easily reference it in different parts of your pipeline, e.g.:
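The module path and alias below are illustrative:

    include { PostProcessing as DragenAnalysisPost } from '../modules/dragen_analysis_post/main.nf'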
Invoke the Process:
The process takes two parameters:
A tuple of sampleId and the parent step directory path.
NOTE: sampleId can be optional; set it to ${sampleId}, or to "*" for parent processes that don't run per sample.
An array of variable inputs, useful for passing multiple parameters to the script.
Call the included process with the necessary input channels and capture the outputs, e.g.:
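A workflow sketch, with illustrative channel contents:

    workflow {
        // tuple of sampleId and parent step directory path
        def step_ch = Channel.of( ['sample01', file('work/dragen_analysis')] )
        // array of variable inputs forwarded to the post-processing scripts
        def args_ch = Channel.of( [file('ref/genome.fa')] )

        DragenAnalysisPost(step_ch, args_ch)
    }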
This approach ensures that the original inputs can be modified, and that additional parameters or arguments are clearly organized and accessible for further processing in the workflow.