There are many packages in the R ecosystem devoted to different aspects of neuroimage processing, several of them listed in the CRAN Task View: Medical Imaging: `oro.nifti`, `oro.dicom`, `RNifti`, `fmri` and `divest` (among others) for reading and writing images; `ANTsR` and `fslr` for more advanced processing methods; `RNiftyReg`, specialized in registration; and `mritc`, focused on image segmentation. Many more, for DTI, fMRI, molecular imaging and visualization, are available on CRAN.
However, to get a complete Neuroimaging pipeline working, it is common to rely on several different packages, and, in general, no single package can offer all the functionalities desired. Tracking the requirements and dependencies of a pipeline is time-consuming, since the user has to manually find, for each function used, the packages it depends on.
In addition, the most typical form of building a pipeline in R is via scripting. Once the needed functions are defined or imported from a third-party package, one or more scripts are written in order to get all processes done.
But scripts are error-prone, difficult to trace and debug as they grow, and require some customization whenever they are run in a different environment (another computer, another laboratory...). These drawbacks motivate the definition of self-contained, easily shareable workflows that can perform the same operations as a script, but with greater legibility and traceability.
The goal of this package is to provide features that address the situations described above:

- Simple definition of workflows, using any R function (including those from other packages, such as `ANTsR` or `fslr`) as a part of it. At definition time, the package warns about inconsistencies in the flow from inputs to outputs (undefined functions or variables).
- A process in a workflow can be anything executable in R, that is, something that can be computed at runtime. This means that a process in the pipeline can be, for example, a function to generate a plot, a function to compute image statistics, or a function that wraps a random forest previously trained to infer a labelling on the input image. The `tidyneuro` package provides the tools to extend this functionality to other computations: machine learning or deep learning models, for instance.

## Installation

You can install the development version from the GitHub repo with:
```r
# install.packages("remotes")
remotes::install_github("neuroimaginador/tidyneuro")
```
## The tidyneuro approach

In `tidyneuro`, a workflow is an ordered collection of processes that convert inputs into outputs, such that the output of one process may be the input to another.
By defining the appropriate functions, one can model the pipeline in the correct order and obtain a flow like the one depicted in the following figure.
In this case, we have 3 inputs and several outputs. The arrows indicate which inputs are used to compute each of the other outputs.
This is a basic example which shows you how to solve a common problem, the computation of tissue (gray matter, white matter and cerebrospinal fluid) volumes:
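The underlying volume computation can be sketched in plain R, independently of `tidyneuro`. The segmentation array and label coding below are assumptions for illustration (1 = CSF, 2 = gray matter, 3 = white matter), not output from the package.

```r
# Hypothetical sketch: tissue volumes from a labelled segmentation array.
# Assumes label coding 1 = CSF, 2 = gray matter, 3 = white matter, and
# voxel dimensions (in mm) taken from the image header.
set.seed(1)
segmentation <- array(sample(0:3, 27, replace = TRUE), dim = c(3, 3, 3))
voxel_dims_mm <- c(1, 1, 1)           # voxel size along each axis, in mm
voxel_volume  <- prod(voxel_dims_mm)  # volume of a single voxel, in mm^3

# Count voxels per tissue label and convert counts to volumes.
tissue_volumes <- sapply(
  c(CSF = 1, GM = 2, WM = 3),
  function(label) sum(segmentation == label) * voxel_volume
)
tissue_volumes  # named vector of tissue volumes, in mm^3
```

In a real pipeline, the segmentation itself would be produced by earlier processes in the workflow (for example, a tissue classifier from one of the packages mentioned above), and this computation would simply be one more process consuming that output.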
Please note that the tidyneuro project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.