Testing should be done throughout the development cycle, whether at the unit level or the system level. From my perspective, edge-case testing is the most critical. If a function's input is supposed to be in the range 1-50, then testing with inputs of 0, 1, 15, 49, 50, and 51 should cover the critical boundary cases.
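As a minimal sketch of that boundary-value idea (the function `validate_level` and its 1-50 contract are hypothetical, purely for illustration), the six inputs map onto a unit test like this:

```python
import unittest

def validate_level(value):
    """Hypothetical function under test: accepts integers in the range 1-50."""
    if not 1 <= value <= 50:
        raise ValueError(f"value {value} out of range 1-50")
    return value

class TestValidateLevelBoundaries(unittest.TestCase):
    def test_inputs_inside_range_are_accepted(self):
        # On-boundary and nominal inputs: 1, 15, 49, 50.
        for value in (1, 15, 49, 50):
            self.assertEqual(validate_level(value), value)

    def test_inputs_outside_range_are_rejected(self):
        # Just-off-boundary inputs: 0 and 51.
        for value in (0, 51):
            with self.assertRaises(ValueError):
                validate_level(value)

if __name__ == "__main__":
    unittest.main()
```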
Developers should *always* write a test case for any new function or external API they introduce, and QA should test what the documentation claims. If the documentation says the product supports 50 network connections, then there need to be tests at 49, 50, and 51 - what happens when we go past the documented maximum?
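In the same spirit, a documented capacity claim can be pinned down directly; here is a sketch where `ConnectionPool` is a hypothetical stand-in for whatever component actually enforces the documented 50-connection limit:

```python
import unittest

class ConnectionPool:
    """Hypothetical stand-in for a component documented to support 50 connections."""
    MAX_CONNECTIONS = 50

    def __init__(self):
        self._open = 0

    def connect(self):
        if self._open >= self.MAX_CONNECTIONS:
            raise RuntimeError("connection limit reached")
        self._open += 1

class TestDocumentedConnectionLimit(unittest.TestCase):
    def test_limit_matches_documentation(self):
        pool = ConnectionPool()
        # Connections 1 through 50 (including 49 and 50) must succeed...
        for _ in range(ConnectionPool.MAX_CONNECTIONS):
            pool.connect()
        # ...and the 51st must fail in a controlled way.
        with self.assertRaises(RuntimeError):
            pool.connect()

if __name__ == "__main__":
    unittest.main()
```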
The more automation that can be introduced into the testing, the *more* testing that can be accomplished. Running tests in the background and checking the results afterwards is a significantly better use of people than performing each test manually.
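One minimal way to run such a suite unattended is a small driver script that a scheduler or CI job can invoke; this sketch assumes the tests live in a `tests/` directory and the log file name is illustrative:

```python
import subprocess
import sys

# Run the whole test suite in the background (e.g., from cron or a CI job)
# instead of having someone execute each test by hand.
result = subprocess.run(
    [sys.executable, "-m", "unittest", "discover", "-s", "tests", "-v"],
    capture_output=True,
    text=True,
)

# Persist the full output so results can be checked after the fact.
with open("test_report.log", "w") as report:
    report.write(result.stdout)
    report.write(result.stderr)

# A nonzero exit code signals failures to the calling automation.
sys.exit(result.returncode)
```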
In my experience, testing of software should start at the lower level (generated native code) that is fully driven by models. A few examples can be found at:
https://www.youtube.com/watch?v=pwJjsXJFU0Y
https://www.youtube.com/watch?v=09tnct2AL4g
Testing of software product lines can be dramatically improved if the code-generator level is extended with "action reports" that synchronize the target code under test, the models, the modeling languages, and the client applications used for monitoring.
The enclosed PPT files are related to PLC-based software production lines.
Before providing an answer, I have a few questions:
What type of testing are you conducting: verification or validation?
What type of software are you testing: business, web, medical, avionics, etc.?
As others have said, start in the small (i.e., unit testing, done during the process) and test to the large (i.e., system testing, alpha), but without answers to those two questions it is difficult to say anything beyond generalities.
Before providing an answer, I have a few questions:
Did you estimate the software product's size in SLOC, FP, UCP, or in...?
Did you estimate the efficiency and efficacy of your testing team for the software product type, or the CMM capability level of your testing team?
An answer is given on the web site www.bisa.rs and in: Lj. Lazić and S. Milinković, "Reducing Software Defects Removal Cost via Design of Experiments using Taguchi Approach," Software Quality Journal, Springer-Verlag New York, ISSN 0963-9314; and Lj. Lazić, I. Đokić, and S. Milinković, "Quantitative Model for Allocation of Resources Based on Success Rate of Software Projects using Design of Experiments," Proceedings of the 7th European Computing Conference (ECC '13), Dubrovnik, Croatia, June 25-27, 2013.
The reason for the questions is that the rules for testing change radically depending on the type of software being tested; beyond the theory, there are government regulations such as the FDA's Final Guidance and the FAA's DO-178B. Both of these regulations are failure-result based, and I'm equally sure that the EU has a longer list of failure-result-based standards.
Before advising on the priority of test cases, it is absolutely necessary to know the type of software. Under the two regulations mentioned, the priority of all test cases is the same - unless, of course, you enjoy fines and jail time.
You can start using combinatorial testing. We did it with good results; please check our list of publications for concrete solutions. In particular, I recommend reading a paper presented at the International Symposium on Software Testing and Analysis (2014): https://www.researchgate.net/publication/263685443_A_Variability-Based_Testing_Approach_for_Synthesizing_Video_Sequences
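To make the combinatorial idea concrete, here is a naive greedy sketch of 2-way (pairwise) test selection; it is not the tooling from the paper, and the video-pipeline factors below are invented for illustration:

```python
from itertools import combinations, product

def pairwise_suite(factors):
    """Greedy 2-way (pairwise) selection: repeatedly pick the candidate
    case that covers the most still-uncovered value pairs."""
    names = list(factors)

    def pairs(case):
        # All (factor, value) pairs exercised together by this case.
        return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

    candidates = [dict(zip(names, values)) for values in product(*factors.values())]
    uncovered = set().union(*(pairs(c) for c in candidates))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda c: len(pairs(c) & uncovered))
        uncovered -= pairs(best)
        suite.append(best)
    return suite

# Invented configuration space, loosely in the spirit of the video paper.
factors = {
    "resolution": ["480p", "720p", "1080p"],
    "codec": ["h264", "vp9"],
    "noise": ["low", "high"],
}
suite = pairwise_suite(factors)
print(f"{len(suite)} cases instead of {3 * 2 * 2} exhaustive ones:")
for case in suite:
    print(case)
```

With these toy factors the greedy pass covers every pair with 6 of the 12 exhaustive cases, and the reduction grows rapidly as more factors and values are added.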