function z = condEntropy (x, y)
% Compute conditional entropy z=H(x|y) of two discrete variables x and y.
% Input:
%   x, y: two integer vectors of the same length
% Output:
%   z: conditional entropy z=H(x|y)
% Written by Mo Chen ([email protected]).
assert(numel(x) == numel(y));
n = numel(x);
x = reshape(x,1,n);
y = reshape(y,1,n);
l = min(min(x),min(y));
x = x-l+1;
y = y-l+1;
k = max(max(x),max(y));
idx = 1:n;
Mx = sparse(idx,x,1,n,k,n);
My = sparse(idx,y,1,n,k,n);
Pxy = nonzeros(Mx'*My/n); % joint distribution of x and y
Hxy = -dot(Pxy,log2(Pxy));
Py = nonzeros(mean(My,1));
Hy = -dot(Py,log2(Py));
% conditional entropy H(x|y)
z = Hxy-Hy;
z = max(0,z);
Sorry for reposting this question.
I found this code and tried to apply it to my data, but the results are not consistent. For example:
Mx = sparse(idx,x,1,n,k,n);
my data x is:
x = -0.000001942084498 -0.000130445985932 -0.000703266812341 -0.000053579514111 -0.000064058620601 -0.000175743747962
I want to understand sparse(idx,x,1,n,k,n).
What I know about the sparse function sparse(i,j,v,n,k,n) is the following:
the triplets give the row subscripts (i), the column subscripts (j), and the values (v); the next two arguments set the matrix size (n rows, k columns), and the last argument reserves space for the nonzeros.
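For context on what those sparse calls compute: each row of Mx is a one-hot indicator of the label x(i), so Mx'*My/n accumulates the joint distribution of (x, y), and mean(My,1) gives the marginal of y. A minimal plain-Python sketch of the same H(x|y) = H(x,y) - H(y) computation (using a Counter instead of sparse indicator matrices; names like cond_entropy are my own, and note the original code's comment says it expects integer-coded labels, not arbitrary floats):

```python
# Sketch of what the MATLAB code computes, assuming integer-coded labels.
# Counter(zip(x, y)) plays the role of nonzeros(Mx'*My): it counts how
# often each (x(i), y(i)) pair co-occurs.
from math import log2
from collections import Counter

def cond_entropy(x, y):
    """H(x|y) = H(x,y) - H(y) for integer-coded label sequences x, y."""
    assert len(x) == len(y)
    n = len(x)
    pxy = Counter(zip(x, y))   # joint counts, like nonzeros(Mx'*My)
    py = Counter(y)            # marginal counts of y, like mean(My,1)
    Hxy = -sum(c / n * log2(c / n) for c in pxy.values())
    Hy = -sum(c / n * log2(c / n) for c in py.values())
    return max(0.0, Hxy - Hy)  # clamp tiny negative rounding error, like max(0,z)

x = [1, 1, 2, 2, 3, 3]
y = [1, 1, 1, 2, 2, 2]
print(cond_entropy(x, y))
```

This is why the shifting by l = min(...) matters in the MATLAB version: sparse requires positive integer subscripts, so the labels are remapped to 1..k before being used as column indices.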