Peter, there were no tensors when I took first year maths in the 1960s, there could have been, but there were too many other competing topics such as developing analysis from the axioms of a field, or the Frenet–Serret formulae describing a curve in space (and obviously plenty more).
James, I think you mean "neutron" rather than "neuron" although there could well be applications to animal nervous systems for all I know.
Shen (I presume your given name is the last), you need to distinguish "order" from "dimension". Once you choose a coordinate system in 3D space, vectors may be represented by lists of three numbers, and linear transformations may be represented by matrices (but the usual matrix multiplication requires cartesian coordinates). The general term "tensor" includes scalars (order 0) and vectors (order 1), with matrices representing tensors of order 2. This continues to any order. The dimension is the number of components needed to represent a vector, i.e. the dimension of the underlying space.
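To make the order/dimension distinction concrete, here is a minimal NumPy sketch (the variable names are mine, just for illustration): the order is the number of subscripts, the dimension is the range of each subscript.

```python
import numpy as np

# All of these live in a 3-dimensional underlying space (dimension 3).
v = np.zeros(3)          # a vector: order 1, 3 components
A = np.zeros((3, 3))     # a linear transformation: order 2, 3**2 = 9 components
T = np.zeros((3, 3, 3))  # an order-3 tensor: 3**3 = 27 components

print(v.ndim, A.ndim, T.ndim)  # orders: 1 2 3
print(T.size)                  # number of components: 27
```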
Originally the term "tensor" (from "tension") was due to the representation of stresses and strains in a solid medium. These were of order 2 (so represented by 3x3 matrices), but higher dimensions and orders were soon found to be useful for other purposes. They first came to prominence in general relativity.
A tensor of order n is a quantity that needs n subscripts, so it contains d^n numerical quantities where d is the underlying dimension. A tensor product of a tensor of order 5 and a tensor of order 3 has 5+3=8 subscripts; it consists of all possible products of individual components. Ordinary matrix multiplication is the sum of A_ij B_hk where we sum all the terms with h = j. To generalise this to arbitrary tensors, apply the contraction operator to the tensor product, i.e. sum over a pair of subscripts where one belongs to each factor in the product. So there are many such products, depending on which subscripts are contracted. The trace of a matrix, by the way, is also a contraction, as you should see for yourself.
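The tensor product, contraction, and the trace can all be checked in a few lines of NumPy; this is a sketch of the operations described above, not any particular library's tensor machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # order-2 tensor (matrix), dimension 3
B = rng.standard_normal((3, 3))

# Tensor product: C[i,j,h,k] = A[i,j] * B[h,k], order 2 + 2 = 4.
C = np.einsum('ij,hk->ijhk', A, B)
assert C.shape == (3, 3, 3, 3)

# Contracting h with j (sum over terms with h = j) recovers
# ordinary matrix multiplication.
AB = np.einsum('ijjk->ik', C)
assert np.allclose(AB, A @ B)

# The trace is a contraction of a single matrix over its two subscripts.
assert np.isclose(np.einsum('ii->', A), np.trace(A))
```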
(I don't know if there is a special name for all this when the "tensors" do not have the same numbers of rows, columns, layers and hyper-layers etc. Maybe they are not useful. In statistics it is usual to stack such structures into vectors and matrices.)
The above is still meaningful when vectors are defined just as lists of numbers. But when applied to an underlying space, a subscripted quantity has to obey certain rules to be counted as a tensor: it must transform correctly under a change of coordinates, otherwise it doesn't represent a physical quantity. If the coordinate system is not cartesian, we have to distinguish two types of tensor: covariant and contravariant. Things get more complicated, and I doubt if they belong in a first year university course.
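For cartesian coordinates the transformation rule can be demonstrated numerically; the sketch below (my own illustration) checks that a tensor equation such as w = Av still holds after rotating the coordinate system:

```python
import numpy as np

rng = np.random.default_rng(1)

# An orthogonal change of cartesian coordinates: rotation about the z axis.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

v = rng.standard_normal(3)       # order-1 tensor (vector)
A = rng.standard_normal((3, 3))  # order-2 tensor

# Transformation rules under the change of coordinates:
v_new = R @ v        # v'_i = R_ij v_j
A_new = R @ A @ R.T  # A'_ij = R_ik R_jl A_kl

# A genuine tensor equation holds in every coordinate system:
# w = A v becomes w' = A' v' in the new coordinates.
w = A @ v
assert np.allclose(R @ w, A_new @ v_new)
```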
The short answer is that there is a generalisation of the kind you are thinking of, and it's called tensor algebra as Peter pointed out.
@Terry Moore: James, I think you mean "neutron" rather than "neuron" although there could well be applications to animal nervous systems for all I know.
Yes, many thanks for pointing this out. I should have written "neutron" instead of "neuron". As a follow-up to what @Terry Moore observed about tensors, and to carry this interesting discussion a bit further, a good survey of low-rank tensor approximation is given in
L. Grasedyck, D. Kressner, C. Tobler, "A literature survey of low-rank tensor approximation techniques", MATHICSE Report, École Polytechnique Fédérale de Lausanne, 2013.
4D objects come up all the time in computer graphics; see quaternions. They are extremely useful for 3-dimensional graphics (especially for rotations). The literature on these spans at least a few fields; computer graphics is just one area that uses them a lot.
Daniel, I assumed that the question meant a generalisation of matrices to objects specified by sets of numbers with more than 2 subscripts. The confusion is why I like to distinguish "dimension" from "order". Quaternions are 4D in the sense of having four components (but order 1). I agree with you that they have applications to computer graphics and other fields. I'm not sure of the advantages compared with matrices as linear transformations on a vector space (which doesn't mean I doubt that they have them). And they are interesting algebraically, as they behave like ordinary numbers except in one respect: multiplication is not commutative (which also applies to matrices).
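The non-commutativity is easy to exhibit with the quaternion units: ij = k but ji = -k. A minimal sketch, with the Hamilton product written out by hand (quaternions stored as (w, x, y, z)):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

i = np.array([0, 1, 0, 0])
j = np.array([0, 0, 1, 0])
k = np.array([0, 0, 0, 1])

# ij = k, but ji = -k: quaternion multiplication is not commutative.
assert np.array_equal(quat_mul(i, j), k)
assert np.array_equal(quat_mul(j, i), -k)
```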
@Terry The reason why they are used all the time in 3D graphics is because we design graphics processors (and, in more modern times, GPUs and APUs) to be very good at multiplying matrices. There is indeed a data dependence caused by the lack of commutativity, but it's a far easier way to interpret computer graphics. If all you need is matrix multiplication, it makes the computations very elegant and simple for a computer to do. Matrix multiplication is a core operation in computer graphics in general (for things such as scaling and rotations), so finding an efficient means of handling many operations in one elegant structure makes it easier to build (less computationally expensive to construct the matrix) and allows for better performance. The other advantage is that you then only need 3x3 or 4x4 matrices to do all your work for the fundamental operations.
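The 4x4 case is the standard homogeneous-coordinates trick: rotation and translation both become matrix multiplications, so a whole chain of operations collapses into one matrix. A small sketch of the idea (helper names are my own):

```python
import numpy as np

def rotation_z(theta):
    """4x4 homogeneous matrix: rotation about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def translation(tx, ty, tz):
    """4x4 homogeneous matrix: translation by (tx, ty, tz)."""
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

# "Rotate 90 degrees about z, then translate by (1, 0, 0)" composed
# into a single matrix; the order matters because matrix
# multiplication is not commutative.
M = translation(1, 0, 0) @ rotation_z(np.pi / 2)

p = np.array([1.0, 0.0, 0.0, 1.0])  # the point (1, 0, 0) in homogeneous form
print(np.round(M @ p, 6)[:3])       # rotated to (0, 1, 0), then shifted: [1. 1. 0.]
```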
Note: I was responding to the question, it wasn't a comment directed at anybody else but the original poster.
Thank you Daniel. I think I misinterpreted your allusion to quaternions. I wondered if there were applications that made quaternions simpler than matrices.
However, Fan Shen originally asked about 3D and 4D matrices (and higher D). You answered one possible interpretation of the question; other posters had a different interpretation, namely, in computer programming terms, something representable by an array with more than 2 subscripts, e.g. array[1..6, 1..6, 1..6, 1..6], which a physicist might interpret as a tensor of order 4 in a 6D space. A further generalisation would be if the subscripts have different ranges; these are not tensors, but are clearly useful in computer programming.
Thank you, all my senior friends. I have read your answers and the papers you offered; they are really valuable to me. @James, thanks very much for the paper you offered. @Terry, what I want is exactly the higher-order matrix like 3x3x3; thank you for pointing that out. Thanks also to @Peter and @Daniel; I will read about tensors and quaternions carefully.
The concept of the determinant of a high-dimensional matrix is investigated in the book by Gelfand, Kapranov and Zelevinsky "Discriminants, Resultants and Multidimensional Determinants". It offers plenty of concrete examples of determinants of matrices in dimensions greater than 2.
I am curious about this question in the case of generalizing completely simple semigroups to semigroups with a higher dimensional sandwich matrix. I have certainly seen tensors in Diffusion Tensor Imaging obtained with certain special MRI studies. Thanks.