However, I would choose one language that suits your needs and stick to it. Two different frameworks also mean twice the problems (more updates, API changes, installation overhead, etc.), and people who want to use your work need to know two languages and roll out a larger setup ...
You could replace some of R's plotting capabilities with matplotlib, or if you need many R packages you should stick to R.
I'd figure out what I wanted to do first, see which language implements the needed functionality better, and then try to avoid mixing languages if possible, since mixing tends to break workflows. I personally always prefer Python; however, we do use R and Python together to build automated reports. Check out RStudio, by the way.
It depends a lot on what you want to do. If most of your functionality is in R, then try to use that wherever possible, and vice versa. If you can get by with only one language, that is probably ideal.
I work most of my pipelines through Linux shell scripting that can invoke Perl (my primary language), Python, and R as needed. I mostly set up my scripts to look for output files with specific extensions generated by the previously invoked script, which avoids trying to get the languages to interact directly. It can be a little clunky, but it seems to work well overall.
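As a rough illustration of that pattern, here is a minimal sketch in Python rather than shell (all script and file names here are hypothetical): each step is launched in its own interpreter, and the hand-off is just checking that the previous step's output file appeared on disk.

```python
import subprocess
from pathlib import Path

# Hypothetical step list: (command, expected output file). Each command
# runs in its own language's interpreter; the only coupling between steps
# is the file the previous step leaves behind.
steps = [
    (["perl", "parse_reads.pl"], "reads.tsv"),
    (["python", "filter_hits.py", "reads.tsv"], "hits.tsv"),
    (["Rscript", "plot_stats.R", "hits.tsv"], "stats.pdf"),
]

for command, expected_output in steps:
    subprocess.run(command, check=True)        # abort the pipeline on non-zero exit
    if not Path(expected_output).exists():     # file-based hand-off check
        raise FileNotFoundError(f"{expected_output} was not produced by {command[1]}")
```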
I am also using Python (Biopython) to process output data from other programs (e.g. BLAST). I parse FASTA sequences or BLAST results in Python (using Biopython) and subsequently use R for statistical analyses or plots.
The only thing you need to consider is producing tabular text output (or .csv) for easy parsing in R; see the sketch below.
I know there are multiple libraries for processing FASTA sequences and BLAST outputs, but I really like Python's simple and fast approach.
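For example, a minimal sketch of that hand-off (the file names and column choices are mine, but `Bio.SeqIO` and the `csv` module work as shown): parse a FASTA file with Biopython and write one tidy row per sequence, which R can then pick up with `read.csv()`.

```python
import csv
from Bio import SeqIO  # pip install biopython

# Parse a FASTA file (hypothetical name) and write one row per sequence:
# id, length, and GC content, as a .csv that R reads with read.csv().
with open("sequences.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["id", "length", "gc_percent"])
    for record in SeqIO.parse("sequences.fasta", "fasta"):
        seq = record.seq.upper()
        gc = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)
        writer.writerow([record.id, len(seq), round(gc, 2)])
```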
Actually, R and Python have quite distinct backgrounds and histories. R originally incorporated many subroutines written in FORTRAN and aimed at dealing with real numbers, while Python originated as a scripting language aimed at dealing with text. This is also why Bioconductor was implemented in R, since it aimed to deal with microarray data consisting of real numbers, while Biopython was more specialized for dealing with sequences. Of course, the two have come to look more or less similar as each was extended to cover its weak points. Thus, if you mainly work with numbers (e.g. microarray data), R is better, while if you work with sequences, Python will be better. If you use both with the same frequency (e.g. RNA-seq), it would be good to get used to both.
The Jupyter project is going to make it easy to mix R and Python. Even now it is possible to mix R and Python code chunks in a single Jupyter (IPython) notebook, but it is still not as good as I want it to be, so I am still using an intermediate csv file to go from Python to R and vice versa.
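For reference, the notebook mixing goes through the rpy2 extension: after `%load_ext rpy2.ipython` you can write `%%R` cells, with `-i`/`-o` to pass variables in and out. Outside a notebook, a minimal Python-side sketch (assuming rpy2 is installed; the variable names are mine) that skips the intermediate csv looks like this:

```python
# Minimal rpy2 sketch (assumes rpy2 is installed): call R from Python
# directly, instead of round-tripping through an intermediate csv file.
import rpy2.robjects as ro

ro.r('x <- rnorm(100)')        # execute R code from Python
mean_x = ro.r('mean(x)')[0]    # R vectors come back indexable from Python
print(f"mean of R's x: {mean_x:.3f}")
```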
You can use a Beaker notebook (http://beakernotebook.com). It is a notebook where you can mix languages, and you can even share variables between them. It is the best solution if you need to mix both languages in your workflow.