Some recent TUIs, such as controllers, handles, and smartphone-based modifications, can be 3D printed by the common end user. Although the 3D modeling and printing process sometimes takes a bit of time to get used to, pre-fab models are also available and are becoming easier to produce in your own home.
Students have already started making custom controllers, which can also easily be produced at home with a small 3D printer.
Another specific example, though a bit simplistic, would be the interface for smartphones that turns your phone into an augmented-reality magnifying glass. It's a 3D-printed handle (I can't find the picture, my apologies) in which your phone sits so it can be held more comfortably.
So far I have only found dTouch and Trackmate among projects using such technologies. Nevertheless, it seems that there are not many computer-vision TUIs that follow this approach.
Well, not really, since the Makey Makey acts just as a peripheral, like LittleBits, Arduino or the rPi. Moreover, the Makey Makey is something you have to buy; even though it is cheap, it might not be considered standard.
I was thinking of an already finished TUI system that works pretty much 'out of the box' but that is downloadable / distributable on the Web. That is why it seems (to me) that only dTouch and Trackmate have been able to achieve this.
Underlying your question is an assumption about what a 'finished TUI' is. What do you mean by that?
Finished TUI systems, as I understand them, are mostly applications; more general systems are usually platforms like the ones Antonio mentioned. (LittleBits is another, even though I find it doesn't give you a lot of potential in practice; Microsoft has Gadgeteer.)
dTouch reminds me of reacTIVision, created by the people who made the Reactable. (I saw Martin Kaltenbrunner was involved in advising dTouch.)
Implicit in your question seems to be the assumption that students (you're talking about pedagogical applications) already have access to all the hardware they need. But a webcam can only do so much. What about sensors other than the webcam, the microphone or the keyboard? What about effectors other than the screen of your laptop or the audio output?
It would be interesting to think about using all the hardware in the phone (especially the accelerometer); many design students use their phones in prototypes, and all your students would have such a device, I guess. The only thing you would really want, to enrich the possibilities, is a servo motor or something else that can make things move. And something that can detect at a distance - but perhaps the iBeacon in the phone can help with that.
Yes, exactly. That is what I am looking for: TUI-based applications ready to work in the Wild, ideally connecting any device through Web Sockets perhaps. So, there is a lot of work on using sensors and devices, but most of it is lab/location based; there is still a gap when it comes to releasing and distributing these systems easily.
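To give a rough idea of what I mean by the Web Sockets route, here is a minimal sketch (not taken from any existing TUI toolkit; the port, message format and library choice are just placeholder assumptions): a tiny relay that any sensor, phone or browser page could connect to, and that re-broadcasts every event to the other connected clients.

```python
# Hypothetical sketch only: a small WebSocket relay.
# Requires `pip install websockets` (a recent version, for the one-argument handler).
import asyncio
import websockets

clients = set()

async def relay(websocket):
    # Every connected device or page can both send and receive events.
    clients.add(websocket)
    try:
        async for message in websocket:
            # Re-broadcast each event, e.g. '{"marker": 7, "x": 0.4, "y": 0.6}',
            # to every other connected client.
            for other in list(clients):
                if other is not websocket:
                    await other.send(message)
    finally:
        clients.remove(websocket)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run until interrupted

asyncio.run(main())
```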
Moreover, just to add, TUIO/reacTIVision was based on dTouch. So the protocol using computer vision is still the main approach (plus touch now). So, in a way, the TUIO protocol is also another successful system that can be distributed in the Wild.
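As a small, hedged illustration of how little is needed on the receiving side: TUIO messages are plain OSC over UDP (reacTIVision sends to port 3333 by default), so a sketch like the one below, using the python-osc package, is already enough to react to fiducial markers. The argument unpacking is my assumption based on the TUIO 1.1 '2Dobj' message layout.

```python
# Rough sketch: listen for TUIO object messages (reacTIVision, UDP port 3333).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_2dobj(address, *args):
    # TUIO multiplexes several commands on one address; 'set' carries the
    # fiducial state: session id, marker id, x, y, angle, ...
    if args and args[0] == "set":
        session_id, marker_id, x, y, angle = args[1:6]
        print(f"marker {marker_id} at ({x:.2f}, {y:.2f}), angle {angle:.2f}")

dispatcher = Dispatcher()
dispatcher.map("/tuio/2Dobj", on_2dobj)

server = BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher)
server.serve_forever()
```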
I don't get your earlier response: "Since the Makey Makey acts just as a peripheral, like LittleBits, Arduino or the rPi."
What do you mean by 'just a peripheral'? (For one thing, the rPi is a full-blown computer, isn't it?) But if I put an Arduino board in a physical object and use sensors and actuators to make the product (digitally) interactive - then I am creating tangible interaction; there is nothing peripheral about it?
Though I would agree that Arduino is still too complicated for most end users. That is why the Makey Makey is actually quite nice.
My own interest in this is more in the various forms of system feedback, rather than in how to detect input. Input (from the user to the system) we are quite good at making tangible - in a way it has always been tangible, because at some point it is the user, with his body in the environment, taking some action that then has to be recognized by the computer, e.g. on a keyboard, a mouse, a touchpad, etc. We can make the input more 'tangible' by letting manipulations of common objects and tools be mapped onto system inputs instead of the abstract keyboard, which has no intrinsic meaning in terms of its physical form. So far the regular story of TUI. But the feedback is often still quite 'intangible' (equally so in most academic projects) - it is hard to really get beyond the screen. The Makey Makey lets you play a game with PlayDough buttons, but the visual feedback is on a screen, and this means "the game action" is 'in the screen' and not 'in the world where our body is'.
This is why it would really be a step forward if we could get a range of non-screen forms of system feedback to be cheap and easily implementable by end-users 'in the Wild'. Any ideas?
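(To make the servo-motor idea from earlier concrete, this is the kind of thing I have in mind - purely a sketch, assuming a Raspberry Pi with the gpiozero library; the GPIO pin and the 'nudge' behaviour are arbitrary placeholders.)

```python
# Illustrative only: cheap, non-screen feedback via a hobby servo on a Raspberry Pi.
from time import sleep
from gpiozero import Servo

servo = Servo(17)  # signal wire on GPIO17 (arbitrary); power and ground wired separately

def nudge():
    # Swing the servo horn out and back: feedback that lives in the world,
    # not on the screen.
    servo.max()
    sleep(0.5)
    servo.min()
    sleep(0.5)
    servo.mid()

nudge()
```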
I completely agree that the rPi is a computer in itself, and I am not diminishing the tangible extensibility that micro-controllers with embedded sensors and actuators can add to a system. Nevertheless, it seems to me that projects such as the Makey Makey are just re-wiring the keyboard to other physical objects (see the small sketch below). I was thinking of it as a peripheral since the software does not leave the screen (as you mentioned). So, in a way, sensors are another type of peripheral, unless they react as part of the interaction system, as in the Internet of Things. But it is still a very powerful tangible interaction approach.
That is why it seems to me that only a few projects manage to leave the screen.
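Here is the small sketch I mean - nothing Makey Makey-specific in it, and the choice of pygame and of the space key are just assumptions: to the program, a press on a PlayDough button wired to a Makey Makey is indistinguishable from an ordinary key press, and the feedback still happens in a window on the screen.

```python
# Sketch: the software only ever sees a key event, whether it came from the
# keyboard or from a banana wired to a Makey Makey.
import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200))  # the 'game action' still lives here

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE:
            print("space pressed")  # identical code path for keyboard and Makey Makey

pygame.quit()
```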
I'm gonna quote you here: "Is it really about tangibility or is it more the embodiment that matters?" Although it is difficult to answer, even if we manage to finally solve that question, there is still the Social Construction of Technology aspect to consider. If someone manages to build the 'perfect' system, people will still need to implement it. So distributability might be another important factor to consider if it is meant to be a useful tool.
Personally, I believe one of the main ways to distribute TUIs would be through computer-vision approaches, ideally enhanced with micro-controllers (sensors/actuators), especially if we consider remote communities in developing countries (which is my main interest), since most of them should have, or at least have access to, the basic hardware to implement them.
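For a sense of how low the hardware bar is, here is a hedged sketch of the computer-vision route using only a webcam and printed markers. It uses OpenCV's ArUco module rather than the marker systems of dTouch or reacTIVision, so it only illustrates the same idea with an off-the-shelf library; the marker dictionary, camera index, and the exact detection call (which changed around OpenCV 4.7) are assumptions.

```python
# Sketch only: detect printed fiducial markers with a webcam (OpenCV's ArUco module).
# In OpenCV 4.7+ the call is cv2.aruco.ArucoDetector(...).detectMarkers(...);
# older versions expose cv2.aruco.detectMarkers() as used here.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
camera = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is not None:
        # Marker id + corner positions: which physical object, and where it is.
        print(ids.flatten(), corners[0][0][0])
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()
```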
Good points, Javier. What I like about your approach is that instead of researching the value of tangibility purely as derived from academic projects, you are working on broadening the scope of contexts and situations in which tangible interaction can take place (cheaper, easier access for many, many people), such that we will (probably) learn many new things about what tangible interaction can do that we would not derive from lab projects. For me that is not just about making sure tangible interaction 'gets adopted in practice'; it also means you can find out what it means to interact with artefacts "in the wild", as you say.
At the same time, if you want it cheap, modular, easy to use and so on - which is what you want for broad distribution - you will be limited by what current technology provides, and that is in part constrained by underlying ideas on human-computer interaction 'from the old days' (graphical interfaces, etc.).
It's a trade-off, I guess. We should push in both directions and integrate the findings.
But perhaps there are technologies or artefacts that provide a win-win situation: cheap, and also immediately taking you out of the box of the conventional design frames. I know of a guy who does research on full-body interaction (Vangelis Lympidouris), and he uses a toy that is sold as a kind of 'golf game' (Gametrak or something?); it costs only 10 dollars, but with it he is able to track people's arm movements and use that in all kinds of research probes for projects. He also uses a 10,000-dollar motion-tracking system provided by a tech company. I like how he combines both.
On the social factor: I think it is crucial. It is actually part of what I call 'embodiment', which has to do not only with the physical body in a physical environment but just as well with a user being 'situated' in a social context (and acting with artifacts against that background).