@ Mark: Express the functions A(t) and B(t) in terms of their Fourier integral representations. Writing cos(ωt) as [exp(iωt) + exp(-iωt)]/2, a simple change of variables transposes ω into the arguments of the Fourier amplitudes of A(t) and B(t), a(ε) and b(ε) respectively. Using the uniqueness theorem, you then obtain a set of two coupled equations for a(ε) and b(ε). Eliminating b(ε) yields an algebraic expression for a(ε) in terms of a(ε-ω) and a(ε+ω). Clearly, a(ε) depends parametrically on ω. For ω=0 the function a(ε) is trivially solved (which is no surprise in the light of the original equations in the time domain). For a finite value of ω, substitute ω by ω + dω, where dω is infinitesimally small, and assume that you know the solution a(ε) corresponding to ω. A first-order Taylor expansion of the functions of ω + dω around ω then gives a first-order differential equation for a(ε) in its dependence on ω. This differential equation is easily solved; however, it involves a "constant" which depends on ε, in addition to an undetermined constant (see the next paragraph). As a result, a(ε) ultimately becomes a more complicated function of ε than in the ω=0 case. Once you have a(ε) as a function of ω, you can obtain A(t) by Fourier-back-transforming a(ε) to the t domain.
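For concreteness, the first step in formulas (the normalization A(t) = \int dε a(ε) exp(iεt) is my assumption; any convention works the same way):

cos(ωt) A(t) = (1/2) \int dε a(ε) [exp(i(ε+ω)t) + exp(i(ε-ω)t)] = \int dε (1/2)[a(ε-ω) + a(ε+ω)] exp(iεt),

so by the uniqueness theorem every term cos(ωt) A(t) in the time-domain equations becomes [a(ε-ω) + a(ε+ω)]/2 in the ε domain.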
At the time of writing this note, I have not made up my mind; however, I suspect that the above-mentioned undetermined constant that enters your solution, after solving the first-order differential equation governing the dependence of a(ε) on ω, will result in a kind of 'dispersion relation' that you will have to solve to self-consistency. That is, for every value of ω you begin with some arbitrary constant and then determine its actual value by iteration. Since you know the exact solution corresponding to ω = 0 (the trivial case), it is natural, in determining this 'dispersion relation', to begin from a small value of ω and proceed step by step towards larger values of ω. This process may prove necessary even if you have no need to know the 'dispersion relation' for all values of ω between 0 and the actual value of ω.
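Just to fix the idea, a toy sketch of that stepping scheme in Mathematica; DispersionMap below is a made-up stand-in for one update of the constant under the (yet to be derived) dispersion relation:

DispersionMap[w_, c_] := (Cos[c] + w)/2;  (* toy stand-in, not the real map *)
SelfConsistentC[w_, cGuess_] :=
  FixedPoint[DispersionMap[w, #] &, cGuess, 100,
    SameTest -> (Abs[#1 - #2] < 10^-10 &)];
c = 0.;  (* would be the exact constant known at w = 0 *)
Do[c = SelfConsistentC[w, c], {w, 0.01, 3, 0.01}]; c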
Following my above discussion, consider the function a(ε-ω)+a(ε+ω) that one encounters in the equation for a(ε). One can formally expand a(ε-ω)+a(ε+ω) in powers of ω, leading to 2a(ε) + 2\sum_{n=1}^{\infty} [a^(2n)(ε)/(2n)!] ω^{2n}, where a^(2n)(ε) stands for the 2n-th derivative of a(ε) with respect to ε. Now, expressing a(ε) as the Fourier integral of A(t) and exchanging the orders of summation and integration, the infinite summation with respect to n can be expressed as \int (dt/π) [cos(ωt) - 1] exp(-iεt) A(t). This result may be fruitfully employed in the expression for a(ε) in terms of a(ε-ω)+a(ε+ω), since on the left-hand side of this equation one has a(ε), which is the Fourier transform of A(t), and on the right-hand side one has a Fourier integral of A(t) times the well-defined function [cos(ωt) - 1].
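For the record, the underlying identity, assuming the convention a(ε) = (1/2π) \int dt exp(-iεt) A(t) (which matches the 1/π prefactor above):

a(ε-ω) + a(ε+ω) - 2a(ε) = (1/2π) \int dt [exp(iωt) + exp(-iωt) - 2] exp(-iεt) A(t) = (1/π) \int dt [cos(ωt) - 1] exp(-iεt) A(t).

Term by term this is the same series, since a^(2n)(ε) = (1/2π) \int dt (-it)^{2n} exp(-iεt) A(t) and 2\sum_{n=1}^{\infty} (-1)^n (ωt)^{2n}/(2n)! = 2[cos(ωt) - 1].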
It is not so simple; there are connections with Mathieu's and Hill's differential equations. The solutions can therefore be expressed only in terms of transcendental functions.
Yakov: Of course, when dealing with differential equations involving trigonometric functions, the Mathieu and the Hill differential equations come into play.
Robert: I admit that I have gone too far in characterizing the solution of the problem at hand as "trivial" (I realized this when I later resorted to pen and paper). I maintain, however, that tackling it requires only the standard and elementary tools of the mathematics of applied physics. In this sense, it is 'trivial'.
The equations may be written in the form dX/dt = M(t) X, where X = (A,B)^T and M(t) is a 2×2 periodic matrix. Floquet analysis may therefore be applied.
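As an illustration of the Floquet step (the matrix M below is a hypothetical stand-in, since the actual M(t) is not written out here; substitute the real one):

{k, v, w} = {1, 1, 3}; T = 2 Pi/w;
M[t_] := {{-1, I k Cos[w t]}, {I k v Cos[w t], -1}};  (* placeholder matrix *)
(* fundamental matrix over one period; its eigenvalues are the Floquet multipliers *)
Phi = NDSolveValue[{X'[t] == M[t].X[t], X[0] == IdentityMatrix[2]}, X, {t, 0, T}];
Eigenvalues[Phi[T]]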
A_tt + ik(v+1) A_t + ik[(v+1)cos(wt) - w v sin(wt)] A = 0,
where the subscripts denote derivatives with respect to t. I wonder if this can somehow be rewritten as a Mathieu equation by an appropriate change of variables.
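For reference, the canonical form one would be aiming for is the Mathieu equation

y''(τ) + [a - 2q cos(2τ)] y(τ) = 0

with constants a and q; the change of variables would have to remove both the first-derivative term and the sin(wt) harmonic at once.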
Mark, perhaps you could deal with your equation using Mathematica (to gain some insight into what to expect). I just did it, and here is the relevant Mathematica code:
{k, v, a0, a1, T, w} = {1, 1, 1, 1, 10, 3};
f = First[
   A /. NDSolve[{A''[t] + I k (v + 1) A'[t] +
        I k ((v + 1) Cos[w t] - w v Sin[w t]) A[t] == 0, A[0] == a0,
      A'[0] == a1}, A, {t, 0, T}]];
Plot[{Re[f[t]], Im[f[t]]}, {t, 0, T}]  (* real and imaginary parts of A(t) *)
@ Sergei: You're right! Thanks for pointing that out. As a result, a transformation like A = exp(L t) X with L a complex constant does not transform the equation into a Mathieu equation. I wonder if there's a different transformation that would work.
The corrected equation has a very regular and simple (numerical) solution, in particular for large values of t. This observation serves as a basis for solving the equation analytically.
I put A(t) = exp(-(t + i kv/w sin(wt))/2) X(t). This yields
4 d^2 X/dt^2 - [ (1 - ikv cos(wt))^2 + 2ikv w sin(wt)] X = 0.
Unfortunately, this is not a Mathieu equation. It is a Hill equation whose periodic coefficient contains harmonics of order 0, 1 and 2. I'm not sure whether anything is known about equations of this kind.
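For what it is worth, writing cos^2(wt) = [1 + cos(2wt)]/2 makes the harmonic content of the bracket explicit:

(1 - ikv cos(wt))^2 + 2ikv w sin(wt) = 1 - k^2 v^2/2 - 2ikv cos(wt) + 2ikv w sin(wt) - (k^2 v^2/2) cos(2wt).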
@Mark: I don't know what kind of physics (if any) is behind your system, but it has a clear-cut 'symmetry breaking' parameter v (which appears in only the first of the two equations). The symmetry is restored only when v=-1. In this particular case, there is a simple (yet nontrivial) degenerate solution: B(t)=-A(t) with A(t)=exp[-t + ik sin(wt)/w].
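For the record, a quick check by differentiation:

d/dt exp[-t + ik sin(wt)/w] = [-1 + ik cos(wt)] exp[-t + ik sin(wt)/w],

so this A(t) decays as exp(-t), modulated by a purely periodic phase factor.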
@Mark: I just found a mathematical reason for v=-1 being a "special" point (even for nonzero frequencies). When you write your system in matrix form, d_t X = M X for a two-component vector X=(A,B)^T, the inverse matrix M^{-1} has a common denominator (1+v), which makes v=-1 a singularity. Check it out.
If B(t)=0 the solution is simple. So let B(t) be nonzero and make the Ansatz u(t)=A(t)/B(t). With the abbreviation a(t) = ik cos(wt) one derives the Riccati equation
u' = a(t) u^2 + (1 - v a(t)) u + 1.
This equation can be solved analytically in some cases. A Riccati equation can always be reduced to a second-order linear differential equation; perhaps the resulting second-order equation is analytically solvable (see the sketch below).
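For reference, a sketch of that reduction, using the standard substitution u = -z'/(a z): the Riccati equation u' = a(t) u^2 + b(t) u + c(t) becomes

z''(t) - [b(t) + a'(t)/a(t)] z'(t) + a(t) c(t) z(t) = 0.

With a(t) = ik cos(wt), b(t) = 1 - v a(t), c(t) = 1 and a'/a = -w tan(wt), this gives exactly the second-order equation quoted further below.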
For v=-1 the substitution y=u+1 yields the Bernoulli equation y' = a y^2 + (1-a) y
with the solution y = (1-a)/(exp(-(1-a)t) - a), i.e. u = (1 - exp(-(1-a)t))/(exp(-(1-a)t) - a).
The corresponding second-order linear differential equation (associated with the Riccati equation above), z''(t) + (w tan(wt) - 1 + ikv cos(wt)) z'(t) + ik cos(wt) z(t) = 0, has not yet been solved and unfortunately is not helpful.
@Frank: you are right. The special case v=-1 is indeed solvable (the hidden symmetry of the original system and of the resulting Riccati equation is at work). But your solution of the Bernoulli equation assumes that a(t) is a constant, which is not the case, since a(t)=ik cos(wt). The correct solution in this case (with v=-1) reads: y(t)=F(t)/G(t), where F(t)=exp[t - ik sin(wt)/w] and G(t)=C - ik \int^{t} dt'' cos(wt'') F(t''), the integral having t as its upper limit (C is an arbitrary constant).
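A quick symbolic sanity check of this solution in Mathematica; only the defining relation G'(t) = -ik cos(wt) F(t) is used, with G kept as an undefined symbol:

a = I k Cos[w t];                        (* a(t), with k and w left symbolic *)
F = Exp[t - I k Sin[w t]/w];             (* F(t) as defined above *)
y = F/G[t];                              (* candidate solution y = F/G *)
resid = D[y, t] - (a y^2 + (1 - a) y);   (* residual of the Bernoulli equation *)
Simplify[resid /. G'[t] -> -I k Cos[w t] F]  (* evaluates to 0 *)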
Another idea for solving the equations is to use an associated conserved quantity. In the special case v=-1 (see above), the sum A(t)+B(t)=E=const. is such a quantity. This leads to the equations A'(t) + (1 - a(t)) A(t) = E and A'(t) = -B'(t). In more general terms,
(A'(t)-B(t))/(B'(t)+B(t)) = v is the associated conserved quantity; the special case v=-1 is contained therein. The result is a differential equation A'(t) = v B'(t) + (v+1) B(t) without the source term a(t). The Ansatz B=A^p (see above) or B=exp(A) works well; the result is a nonlinear equation for A(t). Once A(t) is known, B(t) can be calculated (numerically). Interesting, in my opinion, is the existence of the conserved quantity. What is the physical background of the problem?
@ Sergei: Sorry for the long silence --- I was on vacation. It seems to me that the substitution B=A^(v+2) you suggest above yields inconsistent equations for dA/dt. Am I missing something?
@Mark: I wish I could. But it does look very simple. A linear combination Z(t)=m*cos(wt)+n*sin(wt) does not work, unfortunately.
I will check the substitution B=A^(v+2), though; Frank independently found that the Ansatz B=A^p works for this system (albeit in a somewhat different configuration).
@Mark: unfortunately, you are quite right. The direct Ansatz B=A^(v+2) in the original system does not work; I just checked it. It means that generalizing the particular solution B(t)=-A(t) (valid for v=-1) to arbitrary v is more complicated than I expected.
@Mark: perhaps just for fun. Your system has an exact solution (at any v) for a non-periodic source, with cos(wt) replaced by exp(Gt) where G=(v+1)/v. Another possibility is a time-dependent amplitude of the periodic source, assuming that k=k(t) on the rhs of both equations. What do you think?