As you are dealing with sign language, i.e. image data, I wonder if we are not dealing with two challenges here:
Feeding images, "visual input" into the software
Training lamtram (or any other neural network) to recognize these images as a complex, pattern-based communication system
Afaik, lamtram does not accept images to date, so maybe you should rather look into image classification frameworks.
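Just to illustrate what the image-classification side of the problem looks like, here is a toy sketch of a linear softmax classifier over flattened image frames, using only NumPy and synthetic data. The class count, frame size, and data are all made up for illustration; a real sign-language system would use a CNN (e.g. in PyTorch or TensorFlow) plus some sequence model for the temporal dimension of signing.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes = 3          # hypothetical number of distinct signs
img_size = 8 * 8       # tiny 8x8 grayscale frames, flattened

# Synthetic training data: each class clusters around its own mean "image".
means = rng.normal(size=(n_classes, img_size))
X = np.vstack([means[c] + 0.1 * rng.normal(size=(50, img_size))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), 50)

# One-hot targets and a single weight matrix (no hidden layers).
Y = np.eye(n_classes)[y]
W = np.zeros((img_size, n_classes))

for _ in range(200):                       # plain batch gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)      # softmax probabilities
    W -= 0.01 * X.T @ (p - Y) / len(X)     # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point is just that the model maps pixel arrays to class labels; translating whole sign-language utterances would additionally need a sequence-to-sequence component on top of such visual features.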
P.S.: Just as I am writing this, I come to realize how this makes your project more complex (and more innovative, in a positive sense), just as it is more complex to analyze a YouTube video as opposed to a classic newspaper article.
Hope there are some experts around who have already trained a computer to recognize sign language.