One thing to consider is that agile testing teams are always testing the N-1 version of the code, while the development team works on version N. Understanding which functions and capabilities are in N-1 versus N is critical for a test team to be successful.
Test cases will have to evolve continually alongside new capabilities in order to cover the functionality added since the previous iteration.
You need to be more specific with your question; as it stands, it is too open. With a more focused question you will get more usable replies.
However, I will try to outline one possible approach. Suppose we do not know the possible challenges, or have only a rough picture of them. In that case, you should conduct a qualitative study to explore and discover the challenges. You can use inductive thematic analysis (see Braun and Clarke, 2006, "Using Thematic Analysis in Psychology") to develop a thematic framework of the challenges, or grounded theory (Charmaz, 2006, "Constructing Grounded Theory: A Practical Guide through Qualitative Analysis") to develop a theory about your topic.
However, if you already know some challenges and would like to check whether they occur in practice, you can conduct a quantitative study with a larger population of software experts.
Therefore, define the scope of your research and state your objective. These decisions will guide your selection of the most suitable methods.
In Scrum, at least, there are no testing teams. One key concept of agile is short feedback loops, and team boundaries add latency that makes fast test feedback impossible. A Scrum team is therefore cross-functional, and there are no dedicated testers on it. Part of self-organisation is that individuals do whatever needs to be done in the moment, so everyone should be able to jump in and test when that is what is required. If they lack the skill set, that is an impediment and should be remedied with training, cross-training, and so forth.
The Product Owner specifies what are usually called "acceptance tests." More broadly, Scrum tacitly advocates a view of testing in line with how Weinberg characterizes it in "Perfect Software": the purpose of running most tests is to figure out which test to write next. All tests have a human at the center of their feedback cycle, and many tests do not involve a computer at all. (BTW, this speaks against automated tests. TPS shunned automation in favor of "autonomation.")
The background for this comes from TPS and, more deeply, from Deming. GM used to spend weeks "testing" and fine-tuning its cars after they came off the assembly line. At Toyota, cars roll off the line and are dispatched straight to the point of sale: quality comes from the development process rather than from an after-the-fact feedback loop. Deming repeatedly emphasized that enterprises with QA organizations tend to deliver lower-quality products than those without, and he used to demonstrate this with his famous "Bead Game."
Empirically, Deming has been vindicated. See this article from IEEE about how to raise quality by getting rid of testing teams: http://spectrum.ieee.org/view-from-the-valley/computing/software/yahoos-engineers-move-to-coding-without-a-net . That is just one of many stories that can be told.
TDD (Test-Driven Development) is a concept used in Agile. The testing team will need to help the developers create scenarios and test cases to guide the code they write.
First, TDD is a design technique, not a testing technique. Many teams use it; few understand it.
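To make the "design technique" point concrete, here is a minimal sketch of the red-green-refactor cycle. The `Stack` class and its tests are entirely illustrative (not from this thread): the tests are written first, and each method exists only because a test demanded it.

```python
class Stack:
    """Written only after the tests below demanded each method."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

# Step 1 (red): write a failing test that expresses the next design decision.
def test_pop_returns_last_pushed_item():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2

# Step 2 (green): write just enough code to make it pass.
# Step 3 (refactor): clean up the design with the tests as a safety net.
def test_pop_on_empty_stack_raises():
    try:
        Stack().pop()
        assert False, "expected IndexError"
    except IndexError:
        pass

test_pop_returns_last_pushed_item()
test_pop_on_empty_stack_raises()
```

Notice that the artifact being shaped is the interface of `Stack`, not a test suite: the tests are a by-product of the design conversation.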
Second, it has been shown to erode architecture and may reduce quality, for the obvious reason that it focuses on "units" at the method level and ends up building a bottom-up procedural architecture. Research has borne this out empirically. See Siniaalto and Abrahamsson, "Comparative Case Study on the Effect of Test-Driven Development on Program Design and Test Coverage," ESEM 2007; Siniaalto and Abrahamsson, "Does Test-Driven Development Improve the Program Code? Alarming Results from a Comparative Case Study," Proceedings of CEE-SET 2007, 10-12 October 2007; Janzen and Saiedian, "Does Test-Driven Development Really Improve Software Design Quality?" IEEE Software 25(2), March/April 2008, pp. 77-84 ("[T]he results didn't support claims for lower coupling and increased cohesion with TDD"); and many of my own publications on the topic.
And, again, good agile practice does not separate testing into a team.
I'm not sure whether this exactly answers your question; we organize testing in agile projects as a game among early adopters, which fits continuous delivery. It is not exactly crowd-testing, because we select testers from promoters: we conduct Net Promoter surveys among the users of our delivery and promote respondents to testers at different levels.
From my research and experience, James is generally correct when he says that Scrum specifically, and most other Agile disciplines in general, stress unit testing and do not use system testing to verify requirements compliance. This is also one reason why software developed for high-reliability, safety-critical, and security-critical applications tends not to use pure agile methodologies. What some US organizations are doing is including a trained tester on each sprint team; how this individual is used varies widely. If you use Agile development methodologies as a purist, then you won't conduct system test.
Test-Driven Design (TDD) is a good design tool, but it does not add much to Agile testing practice. TDD does not add to the quality of individual work packages, since it does not guarantee any level of coverage. Where TDD does add value is in the areas of usability and independence of evaluation.
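The coverage caveat is easy to demonstrate. In this illustrative sketch (my example, not from the thread), a TDD-style suite is fully green, yet one branch of the code it drove into existence is never exercised:

```python
def classify(n):
    """Classify an integer by sign."""
    if n < 0:
        return "negative"
    if n == 0:
        return "zero"
    return "positive"   # this branch is never exercised by the suite below

# The TDD-style tests that "drove" the design: they only cover the
# behaviors the developer thought to specify up front.
def test_classify():
    assert classify(-5) == "negative"
    assert classify(0) == "zero"
    # No test for positive inputs: the suite passes,
    # yet branch coverage is incomplete.

test_classify()
```

In other words, a green TDD suite tells you the specified behavior works; it says nothing about the behavior nobody specified, which is why TDD guarantees no particular coverage level.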
Carl, though the rest of what you say about TDD rings true, how does TDD help with usability? It's about code rather than about interfaces. The TDD mantra is that you never write *code* unless you have a test for it; hence, most TDD happens at the unit (method) level.
Some people confuse test-first development and BDD with TDD — maybe you're referring to those?
James, it does sound a bit strange if you view usability in psychological terms, but from an industrial standpoint it makes sense. In the industrial view, usability increases as the time to complete a task decreases. In theory, the tests in TDD are designed by the "requesting" user, and from those system tests, unit tests emerge. It is doubtful that an end user can design an effective unit test, and the theory implies that the tests are based on system-level user tasks. That is exactly the type of test case that is ideal for conducting a performance-based test (learning or experience curve). The argument here is how you define usability: as feeling or as performance. Sorry, now I'm getting into religion.
According to that wonderful reference, Wikipedia, BDD is an outgrowth of TDD, so both would be based on system-level user tasks. Here is a question: how does BDD or TDD differ from scenario-based testing? From what I read, the only difference is time order. It also implies that the software is being built from use cases or user stories.
Carl, Wikipedia is technically misleading there. BDD was a reaction born of Dan North's distaste for TDD, which he found to be full of "blind alleys" (http://dannorth.net/introducing-bdd/). Whether it differs from a system-level, scenario-based perspective is in the eye of the beholder. But, like TDD, and despite the fact that it employs "tests," it is not a testing technique but rather a design technique.
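The stylistic difference between the two can be shown side by side. This is a hedged sketch with a hypothetical shopping-cart example of my own; real BDD tooling (Cucumber, behave) uses Gherkin files, but the contrast survives in plain Python:

```python
class Cart:
    """Hypothetical system under discussion."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# BDD style: the test reads as a behavior specification in the
# user's vocabulary (Given / When / Then), i.e. a system-level scenario.
def test_adding_an_item_increases_the_total():
    # Given an empty cart
    cart = Cart()
    # When the customer adds a book priced 10
    cart.add("book", 10)
    # Then the total is 10
    assert cart.total() == 10

# TDD style: the test targets a unit-level method contract,
# with no user-facing narrative.
def test_total_sums_prices():
    cart = Cart()
    cart.add("a", 1)
    cart.add("b", 2)
    assert cart.total() == 3

test_adding_an_item_increases_the_total()
test_total_sums_prices()
```

Mechanically the two tests do the same thing; what differs is whose language frames the example, which is why both are better understood as design aids than as testing techniques.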
I don't think BDD requires either use cases or user stories. Both are just housekeeping techniques to support a conversation between the using constituency and the development community. I do think that *good* use of BDD depends on informed understanding on the part of whoever is writing the tests. That is an issue of dialogue with feedback, though written artifacts can support the process by organizing detail and documenting decisions.