Does this mean that both scales measure the same construct, or does it mean that the variance of one scale explains 100 per cent of the variance of the other? What are the implications of this finding? This is for a large sample (N = 955).
Sometimes you get illogical results, so first check whether you have made any mistakes, as Theo mentioned. But sometimes respondents are simply not serious in their responses, and that can also produce such results. I suggest re-administering both measurements to a small group; that should give you an approximate picture that helps you confirm or reject the finding.
I would echo earlier comments about checking for errors. It can also arise by chance fairly easily in small samples with dichotomous or discrete scales (where only a small number of values can be observed).
In a large sample such as this it could arise by chance if the true correlation were high and the scales dichotomous or discrete, particularly if you have missing data.
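To make the chance-correlation point concrete, here is a quick simulation: two dichotomous items driven by the same latent variable, checked in small subsamples (as might remain after listwise deletion of missing data). The subsample size, noise level, and trial count are all illustrative assumptions, not estimates from the question's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two dichotomous items with a high (but imperfect) true association.
# Count how often a small subsample shows a *perfect* sample correlation.
n_trials = 2000
n = 10            # illustrative: a small effective n after missing data
perfect = 0
for _ in range(n_trials):
    latent = rng.normal(size=n)
    a = (latent + rng.normal(scale=0.3, size=n) > 0).astype(int)
    b = (latent + rng.normal(scale=0.3, size=n) > 0).astype(int)
    if a.std() > 0 and b.std() > 0:        # r is undefined for constants
        r = np.corrcoef(a, b)[0, 1]
        if np.isclose(abs(r), 1.0):
            perfect += 1
print(f"perfect correlations in {n_trials} subsamples: {perfect}")
```

Even though the true correlation is below 1, a fair share of small dichotomous subsamples come out as |r| = 1 exactly, simply because only two values can be observed.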
Lastly, I would check for structural or other constraints on either scale that might make a high or perfect correlation possible. For example, if both scales are subject to ceiling or floor effects, they could be perfectly correlated. Thus "Do you like pizza?" and "Are kittens cute?", coded 0 = no and 1 = yes, are likely to be perfectly correlated in many samples.
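The ceiling-effect point can be sketched the same way: two completely unrelated yes/no items that both sit near a ceiling show a perfect sample correlation surprisingly often. The 90 per cent "yes" rate and the sample sizes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two *independent* yes/no items, each with ~90% "yes" (a ceiling effect).
n_samples = 5000
n = 15
perfect = 0   # samples where r = 1 exactly
defined = 0   # samples where r is defined (neither item constant)
for _ in range(n_samples):
    pizza = rng.binomial(1, 0.9, size=n)
    kittens = rng.binomial(1, 0.9, size=n)
    if pizza.std() > 0 and kittens.std() > 0:
        defined += 1
        if np.isclose(np.corrcoef(pizza, kittens)[0, 1], 1.0):
            perfect += 1
print(f"r = 1 in {perfect} of {defined} samples where r was defined")
```

For 0/1 items, r = 1 requires the two response vectors to be identical, and near a ceiling the few "no" answers coincide by chance often enough for this to happen repeatedly.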
I agree with the colleagues who said your results are extremely unlikely. In any case, I think you should specify the type of correlation you are using (Pearson, cosine, quasi, Spearman, etc.), although everyone seems to be assuming Pearson.
I would also advise you to post the scatterplot of your data, since that makes it easy to confirm whether your plot is actually a line (meaning you can transform one variable into the other without error, using a linear equation with non-zero slope) or whether it shows some dispersion (and thus measurement error exists).
I'm sending an image of a small test with four data sets, each measured with four different types of correlation, reported below:
EXAMPLE 1 - GREEN:
PEARSON : 0.0
COSINE  : 0.878310065654
QUASI   : 0.837209302326
SPEARMAN: 0.0
EXAMPLE 2 - RED:
PEARSON : 0.956182887468
COSINE  : 0.990536064688
QUASI   : 0.840336134454
SPEARMAN: 0.956182887468
EXAMPLE 3 - BLUE:
PEARSON : 0.454545454545
COSINE  : 0.874639285677
QUASI   : 0.806451612903
SPEARMAN: 0.454545454545
EXAMPLE 4 - PURPLE:
PEARSON : 1.0
COSINE  : 1.0
QUASI   : 1.0
SPEARMAN: 1.0
There are many ways of numerically measuring the relation between two (or more) variables, since the notion of relation itself is extremely complex. For clarity I'll also leave the code that generated those numbers (let's hope I made no mistake :p). It's Python and numpy, with the image produced with matplotlib.
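Since the attached code isn't reproduced in the thread, here is a minimal numpy-only sketch of three of those coefficients (I've omitted the "quasi" correlation, whose definition isn't given above); the toy vectors are my own, not the four sets from the image.

```python
import numpy as np

def pearson(x, y):
    """Pearson r: cosine similarity of the mean-centred vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

def cosine(x, y):
    """Cosine similarity of the raw (uncentred) vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def spearman(x, y):
    """Spearman rho: Pearson r on the ranks (ties not handled here)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

x = np.array([1, 2, 3, 4, 5.0])
y = 2 * x + 1   # exact linear transform of x

print("PEARSON :", pearson(x, y))    # 1.0 up to rounding
print("COSINE  :", cosine(x, y))     # high, but below 1: y is not proportional to x
print("SPEARMAN:", spearman(x, y))   # 1.0 up to rounding
```

This also illustrates why the coefficients disagree on the same data: cosine similarity is computed on the raw vectors, so the intercept in y = 2x + 1 pulls it below 1 even when Pearson and Spearman are exactly 1.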