I'm not aware of any (there may well be lots that I just don't know about), but the paper describes their algorithm, so it would probably be easy to reproduce with some basic familiarity with R or Matlab. In fact, I use pretty much the same algorithm in my PCL (Presentation) code for the pseudorandomized experiments I run, so it's quite doable.
Yep. I have an array (of items, conditions, whatever) and shuffle that array; if necessary, I also pseudorandomize it so that there aren't too many trials of the same condition next to each other.
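A minimal sketch of that shuffle-and-check idea in R (not my actual PCL code; the trial list, column names, and constraint values here are made up for illustration): shuffle the rows, check the longest run of any one condition, and reshuffle until the run-length constraint is satisfied.

```r
## Shuffle a trial list until no condition occurs more than max_run times in a row.
pseudorandomize <- function(trials, cond_col = "condition", max_run = 2,
                            max_tries = 10000) {
  longest_run <- function(x) max(rle(as.character(x))$lengths)
  for (i in seq_len(max_tries)) {
    shuffled <- trials[sample(nrow(trials)), , drop = FALSE]
    if (longest_run(shuffled[[cond_col]]) <= max_run) return(shuffled)
  }
  stop("No valid ordering found; try relaxing the constraint.")
}

## Made-up example: 3 conditions with 8 items each
trials <- data.frame(item = 1:24, condition = rep(c("A", "B", "C"), each = 8))
set.seed(1)
ordered_trials <- pseudorandomize(trials, max_run = 2)
```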
It isn't organized and commented clearly enough for general-purpose use; it's just code for my own specific experiment, so I'm a little wary of posting it online. My worry is that people who aren't comfortable with PCL would find it, assume they can use it out of the box, and run into problems, whether errors that keep the experiment from running at all, or an experiment that runs but silently presents the wrong stimuli at the wrong times. But if you'd like it for your own use, drop me a message and I can share it (with the disclaimer that it's just an example and may need substantial modification to work for your needs).
Thanks for your offer. That'd be great. What I'm really looking for is a flexible tool, preferably in R, that lets me randomize whatever data I have rather than just producing fixed pseudorandomized lists. It'd also be great if interdependencies could be taken into account. For example: filler-condA implies X whereas target-condA implies Y; X and Y must not occur more than three times in a row, fillers and targets must not occur more than twice in a row, and conditions (A, B) always alternate, or the like...
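A rough sketch of how constraints like these could be expressed in base R (all object and column names below are illustrative, not an existing package): each run-length constraint is a predicate on a candidate ordering and is checked by reshuffling until it holds, while the strict A/B alternation is enforced by construction (interleaving separately shuffled A and B trials), since a hard alternation is very unlikely to come out of a plain reshuffle.

```r
max_run <- function(x) max(rle(as.character(x))$lengths)

## Build one candidate order: shuffle A-trials and B-trials separately,
## then interleave them so conditions strictly alternate.
make_candidate <- function(trials) {
  a <- trials[trials$condition == "A", ]
  b <- trials[trials$condition == "B", ]
  a <- a[sample(nrow(a)), ]
  b <- b[sample(nrow(b)), ]
  rbind(a, b)[order(c(seq_len(nrow(a)) * 2 - 1, seq_len(nrow(b)) * 2)), ]
}

## Keep drawing candidates until every remaining constraint holds.
shuffle_until <- function(trials, constraints, max_tries = 20000) {
  for (i in seq_len(max_tries)) {
    cand <- make_candidate(trials)
    if (all(vapply(constraints, function(f) f(cand), logical(1)))) return(cand)
  }
  stop("No ordering satisfied all constraints; relax them or raise max_tries.")
}

## Hypothetical trial list: targets/fillers crossed with conditions A/B,
## plus a derived column (X = filler in A, Y otherwise).
trials <- expand.grid(type = c("target", "filler"),
                      condition = c("A", "B"),
                      item = 1:6, stringsAsFactors = FALSE)
trials$derived <- ifelse(trials$type == "filler" & trials$condition == "A", "X", "Y")

constraints <- list(
  function(d) max_run(d$derived) <= 3,  # X/Y: at most 3 in a row
  function(d) max_run(d$type)    <= 2   # targets/fillers: at most 2 in a row
)

set.seed(1)
ordered <- shuffle_until(trials, constraints)
```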