What are the advantages of using the Singular Value Decomposition++ (SVD++) algorithm, other than it being a much faster algorithm, and what are the disadvantages of using SVD++?
Disclaimer: The following concerns SVD and not SVD++.
Can you be more specific?
Advantages/disadvantages with respect to what?
What's your use for it?
What do you mean by faster algorithm?
What you should know is that, given a matrix A, there are various ways you can "decompose" it (into a product of matrices)---we talk about "matrix decomposition" or "matrix factorization"---and SVD is one of them. You can think of your matrix A as a transformation: one that takes a point in space from one location to another. Likewise, a product of matrices, say T*R, can be seen as a sequence of two transformations, where you first apply transformation R and then transformation T (all this is beautifully explained in: https://www.youtube.com/watch?v=kYB8IZa5AuE&index=4&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab). We usually decompose a matrix to better understand what it's doing. For example, A = T*R means that the transformation (matrix if you want) A is decomposed into a sequence of two transformations, R followed by T. If R represents a rotation and T a translation, then, by decomposing A into T*R, you know that A corresponds to a rotation followed by a translation (of some point in space). This insight into what basic transformations your matrix is performing is useful in many applications.
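To make this concrete, here is a small NumPy sketch (my own toy example, using 2-D homogeneous coordinates as an assumption so that a translation can also be written as a matrix): applying the product T*R to a point gives the same result as applying R first and then T.

    import numpy as np

    # Toy 2-D example in homogeneous coordinates (an assumed convention here,
    # so that a translation can also be expressed as a matrix).
    theta = np.pi / 2                                   # rotate 90 degrees counter-clockwise
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]])  # rotation
    T = np.array([[1, 0, 3],
                  [0, 1, 1],
                  [0, 0, 1]])                           # translation by (3, 1)

    A = T @ R                                           # the composed transformation

    p = np.array([1, 0, 1])                             # the point (1, 0)
    print(A @ p)          # ~[3. 2. 1.]: the point (3, 2), up to floating-point error
    print(T @ (R @ p))    # same result: R is applied first, then T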
SVD is the generalization, to non-square matrices, of the eigenvalue decomposition (EVD), which is used when the matrix you want to "decompose" is square. SVD decomposes a matrix A into a set of three matrices (three transformations) U, D, and V such that A = UDV^T, where superscript T denotes matrix transpose, D is a diagonal matrix, and U and V are orthogonal matrices. The columns of U and V are called the left-singular vectors and right-singular vectors of A, respectively (https://en.wikipedia.org/wiki/Singular-value_decomposition). A diagonal matrix can be seen as a transformation that scales the axes of your space, whereas orthogonal matrices basically apply a rotation to your points (one that preserves the angles and lengths of the vectors representing the points). This way of looking at a matrix is useful in many applications, such as calculating the (pseudo-)inverse of a matrix, approximating a matrix, determining the rank of a matrix (since rank(A) = rank(D)), etc. (https://en.wikipedia.org/wiki/Singular-value_decomposition).
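For instance, here is a minimal NumPy sketch (the matrix A below is just a made-up example) showing that the decomposition returned by numpy.linalg.svd satisfies A = UDV^T, that U is orthogonal, and that rank(A) = rank(D):

    import numpy as np

    A = np.array([[3., 1., 1.],
                  [-1., 3., 1.]])                 # a small non-square example

    U, s, Vt = np.linalg.svd(A)                   # s holds the diagonal of D (the singular values)

    D = np.zeros_like(A)                          # rebuild D with the right (2 x 3) shape
    D[:len(s), :len(s)] = np.diag(s)

    print(np.allclose(A, U @ D @ Vt))             # True: A = U D V^T
    print(np.allclose(U @ U.T, np.eye(2)))        # True: U is orthogonal
    print(np.linalg.matrix_rank(A) == np.count_nonzero(s > 1e-10))  # True: rank(A) = rank(D)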
Some notes:
As an application, SVD is used to perform principal component analysis (PCA), which aims to decompose a matrix (usually a set of observations) in order to find the directions (called principal directions or principal axes) in which the observations have the largest variance (see https://en.wikipedia.org/wiki/Principal_component_analysis); basically finding the directions along which your data are distributed, which is useful for dimensionality reduction (a minimal sketch follows these notes).
SVD is not an algorithm, it's a matrix decomposition. Hence, you cannot talk about its speed and you cannot qualify it as fast or slow. However, there are algorithms that perform SVD, such as the Golub-Kahan-Reinsch or Jacobi algorithms and others, which you can compare with respect to their speed in computing the SVD.
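As a sketch of the PCA note above (the random data below are only a placeholder for a real set of observations): center the observations, take the SVD, and read the principal directions off the right-singular vectors.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # 100 observations of 3 variables (toy data)

    Xc = X - X.mean(axis=0)                    # 1. center the observations
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)   # 2. SVD of the centered data

    principal_directions = Vt                  # rows are the principal axes
    explained_variance = s**2 / (len(X) - 1)   # variance of the data along each axis

    print(principal_directions)
    print(explained_variance)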
Computing the SVD is slow and computationally expensive, and it requires care when dealing with missing data. A gradient-descent approach is much faster and deals well with missing data. Gradient descent is an iterative optimization method; the principal idea behind it is that how far off we are at one point (the prediction error) tells us how to improve the prediction. SVD is used to perform PCA, which aims to decompose a matrix (usually a set of observations) in order to find the directions (principal axes) in which the observations have the largest variance.
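To make the gradient idea concrete, here is a minimal sketch (not the full SVD++ model; the rating matrix, factor size, and learning-rate/regularization values below are made-up assumptions) of factorizing a ratings matrix by gradient descent over the observed entries only:

    import numpy as np

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)    # 0 marks a missing rating
    observed = np.argwhere(R > 0)                # only these entries drive the updates

    k, lr, reg, n_epochs = 2, 0.01, 0.02, 500    # illustrative hyper-parameters
    rng = np.random.default_rng(0)
    P = 0.1 * rng.standard_normal((R.shape[0], k))   # user factors
    Q = 0.1 * rng.standard_normal((R.shape[1], k))   # item factors

    for _ in range(n_epochs):
        for u, i in observed:
            pu = P[u].copy()
            err = R[u, i] - pu @ Q[i]                # how far off the prediction is ...
            P[u] += lr * (err * Q[i] - reg * pu)     # ... tells us how to move the factors
            Q[i] += lr * (err * pu - reg * Q[i])

    print(np.round(P @ Q.T, 2))                      # predictions, including the missing cells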
Image transforms such as the DWT do not represent an image in an optimal form. SVD provides an optimal (in the least-squares sense) low-rank representation of an image, packing most of the information into a few coefficients for a given image.
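As an illustration of that packing, a short sketch (the random 64x64 array is only a stand-in for a grayscale image) of keeping the k largest singular values, which gives the best rank-k approximation in the least-squares sense:

    import numpy as np

    def truncated_svd_approx(img, k):
        # Best rank-k approximation of a 2-D array, via truncated SVD.
        U, s, Vt = np.linalg.svd(img, full_matrices=False)
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    img = np.random.default_rng(0).random((64, 64))   # stand-in for a grayscale image
    approx = truncated_svd_approx(img, k=10)          # k*(m + n + 1) numbers instead of m*n
    print(np.linalg.norm(img - approx) / np.linalg.norm(img))   # relative reconstruction error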