For every concrete operator this is a concrete (and often very difficult) problem. To illustrate, consider an operator T on an n-dimensional Hilbert space H. First of all, since ||T||^2 = ||T*T||, the problem reduces to computing the norm of the self-adjoint operator A = T*T. For a self-adjoint operator on a finite-dimensional space, the norm equals the maximum of the moduli of its eigenvalues. Computing the eigenvalues reduces to solving the polynomial equation det(A - tI) = 0. This equation has degree n, so for n > 4 there is no general formula for its solution.
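The reduction above is easy to check numerically. A minimal sketch (the matrix T is a hypothetical example, not from the discussion): compute A = T*T, take its largest eigenvalue, and compare the square root against NumPy's built-in spectral norm.

```python
import numpy as np

# Hypothetical 3x3 operator T for illustration.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

A = T.conj().T @ T                    # self-adjoint A = T*T
eigenvalues = np.linalg.eigvalsh(A)   # real eigenvalues of a Hermitian matrix
norm_T = np.sqrt(eigenvalues.max())   # ||T|| = sqrt(max eigenvalue of T*T)

# Cross-check: NumPy's 2-norm is the largest singular value of T.
print(norm_T, np.linalg.norm(T, 2))
```

Of course, `eigvalsh` already solves the eigenvalue problem iteratively rather than via the characteristic polynomial, which is exactly the point of the degree-n obstruction.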
The assumption made here is that the Hilbert space H is separable and complex. See the proof of the first theorem, starting on page 416. In fact, all of the details about normal operators appear in the proofs.
Another good place to look is in
Z. Tarcsay, Characterizations, extensions and factorizations of Hilbert space operators, Ph.D. thesis, Eötvös Loránd University, Hungary, 2013:
Although the norm is not part of the discussion in this thesis, the characterization of operators given there is both helpful and interesting.
More to the point, see
H.H. Bauschke, J.V. Burke, F.R. Deutsch, H.S. Hundal, J.D. Vanderwerff, A new proximal point iteration that converges weakly but not in norm, Proc. Amer. Math. Soc., 1997:
https://people.ok.ubc.ca/bauschke/Research/31.pdf
Another very good place to check is
M. Wang, Constructive analysis of partial differential equations, Ph.D. thesis, University of Waikato, 1997.
I appreciate the answer given by Vladimir Kadets, but it holds only when the operator is linear. If your A is linear, you should not compute its characteristic equation to find the norm; rather, apply the power method to find the eigenvalue of largest magnitude, which gives the norm.
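A minimal sketch of the power method for a symmetric matrix (the 2x2 matrix A is a made-up example; for a symmetric matrix the norm is the largest-magnitude eigenvalue):

```python
import numpy as np

def power_method(A, num_iter=1000, tol=1e-12):
    """Approximate the largest-magnitude eigenvalue of a square matrix A."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iter):
        y = A @ x
        lam_new = x @ y                  # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)        # re-normalize the iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # symmetric, so ||A|| = max eigenvalue
print(power_method(A))                  # close to (5 + sqrt(5))/2 ≈ 3.618
```

The iteration converges whenever there is a gap between the largest-magnitude eigenvalue and the rest of the spectrum; the convergence rate is governed by the ratio of the two.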
If A is a concrete operator, for example an integral operator with kernel K(t,s), an upper estimate can be obtained from Hölder's inequalities, and a lower estimate can be obtained from a concrete element x such that ||Ax|| = ||A|| ||x||. Often x(t) = 1, t ∈ [-a, b] (in the compact case).
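A numerical sketch of this two-sided estimation, using the hypothetical kernel K(t,s) = min(t,s) on [0,1]^2 (my choice for illustration): discretize the integral operator on a midpoint grid, compute the discrete norm, and compare it with the Hilbert-Schmidt upper bound ||A|| ≤ (∫∫ K^2)^{1/2} that follows from the Cauchy-Schwarz (Hölder, p = 2) inequality.

```python
import numpy as np

# Hypothetical kernel K(t, s) = min(t, s) on [0, 1]^2; discretize the
# operator (Af)(t) = ∫ K(t, s) f(s) ds on an n-point midpoint grid.
n = 500
t = (np.arange(n) + 0.5) / n
K = np.minimum.outer(t, t)
A = K / n                                # quadrature weight 1/n

op_norm = np.linalg.norm(A, 2)           # discretized operator norm
hs_bound = np.sqrt(np.sum(K**2)) / n     # Hilbert-Schmidt upper bound

print(op_norm, hs_bound)                 # op_norm never exceeds hs_bound
```

For this particular kernel the exact norm is known to be 4/pi^2 ≈ 0.4053 (min(t,s) is the Green's function of a Sturm-Liouville problem), while the Hilbert-Schmidt bound gives about 0.408, so the upper estimate is quite tight here.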
Many examples can be found in my teaching aid:
B. Osilenker, Problems and Exercises in Functional Analysis (in Russian).
Let $\{e_k\}$ be an orthonormal basis in the Hilbert space and $\{\lambda_k\}$ a bounded sequence. The diagonal operator $A e_k = \lambda_k e_k$ (this is a principal example of an operator; see, for example, the Hilbert-Schmidt theorem) has norm $\|A\| = \sup_k |\lambda_k|$.
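A quick finite truncation of the diagonal-operator example (the bounded sequence λ_k = 1 - 1/(k+1) is my own illustrative choice):

```python
import numpy as np

# Truncated diagonal operator A e_k = λ_k e_k with a hypothetical
# bounded sequence λ_k = 1 - 1/(k+1), k = 1, ..., 100.
k = np.arange(1, 101)
lam = 1.0 - 1.0 / (k + 1)
A = np.diag(lam)

# The norm of a diagonal operator is sup_k |λ_k| (max over a truncation).
print(np.linalg.norm(A, 2), np.abs(lam).max())
```

In the infinite-dimensional case the supremum need not be attained, which is exactly what makes the diagonal operator a useful source of examples.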
It has already been mentioned that the problem can be reduced to computing the norm of a (positive) self-adjoint operator, say A. This is true in arbitrary (infinite-dimensional) Hilbert spaces H too. For such operators, the norm equals the spectral radius r(A) (the radius of the smallest disc (or interval) containing the spectrum). For r = r(A) there is a formula, r(A) = lim_{n→∞} ||A^n||^{1/n}, which holds for arbitrary operators in B(H). In the general case, applying this formula to compute r(A) is a difficult task. Here is an example where this method works (the example might be interesting in itself). Consider the infinite matrix A = (a_ij).
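Gelfand's formula r(A) = lim ||A^n||^{1/n} is easy to watch converge numerically. A sketch on a hypothetical non-normal 2x2 matrix (my example), where the spectral radius 0.5 is much smaller than the norm:

```python
import numpy as np

# Non-normal matrix: both eigenvalues are 0.5, but ||A|| ≈ 10,
# so r(A) = 0.5 is only visible through high powers of A.
A = np.array([[0.5, 10.0],
              [0.0,  0.5]])

def gelfand_estimate(A, n):
    """n-th term of Gelfand's formula: ||A^n||^(1/n)."""
    return np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)

for n in (1, 10, 100, 1000):
    print(n, gelfand_estimate(A, n))    # decreases toward r(A) = 0.5
```

The slow convergence for non-normal matrices illustrates the remark above: the formula always holds, but using it for actual computation can be painful. For self-adjoint A the situation is trivial, since then ||A^n|| = ||A||^n and every term already equals r(A).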
I continue with the real finite-dimensional case. For a positive self-adjoint operator A, the norm is given by ||A|| = sup <Ah, h>, the supremum being taken over all h in R^n with ||h|| = 1. Thus we have to find the maximum of a quadratic form subject to the constraint that h lies on the unit sphere. This is a smooth constrained-extremum problem, which can be solved by the method of Lagrange multipliers.
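The Lagrange condition for this problem, ∇(<Ah, h> - λ(<h, h> - 1)) = 0, gives Ah = λh, i.e. the critical points are exactly the unit eigenvectors and the constrained maximum is the largest eigenvalue. A small check on a hypothetical positive symmetric matrix (eigenvalues 1, 2, 4):

```python
import numpy as np

# Hypothetical positive self-adjoint matrix; its eigenvalues are 1, 2, 4.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eigh(A)
h = vecs[:, -1]                  # unit eigenvector for the largest eigenvalue
print(h @ A @ h, vals[-1])       # quadratic form attains the max there

# Random unit vectors never exceed the eigenvalue bound.
rng = np.random.default_rng(1)
for _ in range(1000):
    u = rng.standard_normal(3)
    u /= np.linalg.norm(u)
    assert u @ A @ u <= vals[-1] + 1e-12
```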
The norm of a positive self-adjoint operator on a finite-dimensional space equals its greatest eigenvalue. This value can be found (approximated) by means of the power method, described in several books and files freely available on the Internet.
A very nice example of the kind you want can be found in the book
Paul R. Halmos, A Hilbert Space Problem Book, Second Edition, Springer-Verlag, New York Heidelberg Berlin, 1982, pp. 99-100, "188. Volterra integration operator."
There you will find the problem together with its nice solution.