I'm also new to RNAseq, but if your datasets are Illumina-derived you should check out https://basespace.illumina.com and the RNAseq apps they have there - some are free and some are not, so hopefully there's something convenient. TopHat might be useful if you are dealing with human, mouse or rat datasets. The apps are quite easy to use, since they are usually constrained to handling the Illumina FASTQ file outputs.
There is quite a jungle of software and of analyses that can be done, depending on what you want to find. The first pass is typically differential expression at the gene level, to find the top changing genes as targets for further research.
This is done by counting the reads that fall in genes (the recent featureCounts software is fast and pretty simple, and starts from alignment BAM files) and running statistical tests for differential expression (DESeq or edgeR from Bioconductor - in basic mode roughly 4-5 lines of R script).
The details of gene-level, count-based differential expression are described e.g. here: http://www.nature.com/nprot/journal/v8/n9/abs/nprot.2013.099.html
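As a sketch of what the count-then-test step looks like in practice - all file names, the number of samples, and the group labels below are placeholders I've made up, so adapt them to your own experiment:

```shell
# 1) Count reads per gene from alignment BAMs with featureCounts
#    (part of the Subread package); annotation.gtf and the BAMs are placeholders.
featureCounts -a annotation.gtf -o gene_counts.txt \
    sample1.bam sample2.bam sample3.bam sample4.bam

# 2) Basic differential expression with edgeR from Bioconductor
#    (the "4-5 lines of R" mentioned above), run here via Rscript:
Rscript - <<'EOF'
library(edgeR)
counts <- read.delim("gene_counts.txt", comment.char = "#", row.names = 1)
counts <- counts[, 6:9]                  # featureCounts puts counts after 5 annotation columns
group  <- factor(c("ctrl", "ctrl", "treat", "treat"))   # made-up design: 2 vs 2
y  <- DGEList(counts = counts, group = group)
y  <- calcNormFactors(y)
y  <- estimateDisp(y, model.matrix(~group))
et <- exactTest(y)
topTags(et)                              # top differentially expressed genes
EOF
```

This is only the basic mode; the protocol paper linked above walks through the design matrix and filtering choices properly.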
Then, depending on what you need, you can go on to transcript discovery (e.g. Cufflinks), isoform deconvolution, splicing analysis or searching for novel expressed regions.
You can also, at any stage, zoom in on your favourite genes or genomic regions in IGV to see transcription changes "by eye".
The software you use and strategy you implement will depend on whether you have a reference genome sequence available. If you do, the RNA-Seq reads can be aligned to it and differential expression analyses can be conducted. If you don't, you might consider performing a de novo transcriptome assembly (depending on the number of reads you have) using your RNA-Seq reads. Once that is completed, you can align the RNA-Seq reads back to the de novo transcriptome assembly to quantify expression and test for differences between treatments.
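A rough command-line sketch of the no-reference route described above - file names and parameter values are illustrative placeholders, not tuned recommendations:

```shell
# Hypothetical paired-end input files reads_1.fastq / reads_2.fastq.

# 1) De novo transcriptome assembly with Trinity:
Trinity --seqType fq --left reads_1.fastq --right reads_2.fastq \
    --max_memory 50G --CPU 8 --output trinity_out

# 2) Build a Bowtie2 index from the assembly and align the reads back to it:
bowtie2-build trinity_out/Trinity.fasta trinity_index
bowtie2 -x trinity_index -1 reads_1.fastq -2 reads_2.fastq -S aligned.sam

# 3) From these alignments you can derive per-contig counts and then
#    test for expression differences between treatments, as in the
#    reference-based workflow.
```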
Some useful software for analysing RNA-Seq data: FastQC for assessing read quality, Trimmomatic for trimming reads, Bowtie2 for alignment, CD-HIT for clustering similar contigs, and Trinity for de novo transcriptome assembly. I would also suggest becoming familiar with Linux, which has many built-in tools that are very useful (e.g. sed, awk, grep), the R statistical language, and the Python programming language.
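As a tiny illustration of why those standard Unix tools are worth learning: FASTQ stores each read as four lines, so awk alone can answer quick questions about a file before you reach for heavier software. The toy file here is made up on the spot:

```shell
# Make a toy FASTQ with two reads (4 lines per read: header, sequence, '+', qualities)
printf '@read1\nACGTACGT\n+\nIIIIIIII\n@read2\nTTGA\n+\nIIII\n' > toy.fastq

# Number of reads = total lines / 4
awk 'END { print NR/4 }' toy.fastq     # prints 2

# Mean read length, using only the sequence lines (every 4th line, offset 2)
awk 'NR % 4 == 2 { sum += length($0); n++ } END { print sum/n }' toy.fastq     # prints 6
```

The same pattern (select lines by `NR % 4`, accumulate, print in `END`) extends to GC content, length histograms and similar quick sanity checks.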
The programs identified here are the right kinds of pipelines. As a beginner, you might find it easiest to put your pipelines together on the Galaxy website - it has the typical tools built into a web page, and can be easier to get to grips with than running the commands separately if you are not accustomed to the command line.
I suppose your question was also about "user-friendly" software. If you are not familiar with Linux or R and are mainly a Windows user, you can try a program called ReadXplorer. It was developed by the Goesmann group in Germany, and the reference paper was recently published in Bioinformatics. It is Java-based and portable to both Windows and Mac.
Hi, I used the AIR software (the one recommended by Walter) on my data, and it gave me back the results in less than 4 hours. It's very user-friendly and fast, and I suggest you try it. You don't need any prior knowledge of programming, and it generates reproducible, sensitive and accurate results, with nice graphics that are easy to interpret. I was able to confirm all the results using different applications in Linux and R, and in my opinion it is the best software both for beginners and for those who already know Linux, R, etc.