With the emergence of TensorFlow.js, training directly in the browser becomes possible. The browser's cross-device reach and easy deployment make it well suited to developing and using machine learning applications, and it may well become a mainstream federated learning platform. What are the disadvantages/challenges of machine learning in the browser?
Take a look at this V8 presentation (though dated, I think it is still relevant) at:
https://www.youtube.com/watch?v=UJPdhx5zTaw
Thanks a lot for gracious participation dear Sushma Tayal and Deepanshu Kumar
When you need to scale to datasets whose processing takes hours, a 2X slowdown means a couple of extra hours. A 2X slowdown also means more energy wasted for the same computation.
Also, note that the comparison in the video is based on porting the same JS to C++, not optimized C++ to JS, which I think hides some extra challenges in performance optimization.
Thanks a lot for gracious participation dear Deepanshu Kumar
Yes, the V8 talk is really relevant; so far my experience with in-browser ML has been slow speed.
The biggest challenge is to accept it psychologically. Human memory is processed in various ways. It is said that information that appeals to more than one sense organ is more permanent. In this context, it may be difficult for some to learn without sharing the same real environment.
Thanks a lot for gracious participation respected Sunny Kumar
Thanks a lot for gracious participation respected Sushma Tayal
The same will pose various challenges for machine learning that mimics the human mind. We'll see when experience builds the data.
A few years ago, I was a Research Scientist at IBM's TJ Watson Research Lab at Yorktown. I spent a lot of my time there prototyping natural language and natural gesture interaction interfaces. More recently, I have spent some time contributing to the community in machine learning and in JavaScript.
Thanks a lot for gracious participation respected Dr Shivani
Thanks a lot for gracious participation respected Dr Keshav Bhambri
Thanks a lot for gracious participation respected Dr Mukesh Kumar
This is a topic that I'm really excited about, taking something I really love, machine learning, and also blending that with my love for front-end and JavaScript development.
Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.
Why machine learning in the browser? Before we jump into that, let's review some really simple terminology. The first speaker covered a bit of this, but for those who weren't at that talk, I just want to cover the basics. Artificial intelligence is a super broad field, and it's really old, dating as far back as the 1950s. You can think of it as efforts directed towards making machines intelligent, and there are many ways you can do that. A sub-aspect of that is machine learning: algorithms that enable machines to learn independently from data. This idea is pretty powerful, because there are a lot of problem spaces where it's almost impossible for software engineers to write concrete rules.
Thanks a lot for gracious participation respected Dr Rajneesh
Thanks a lot for gracious participation respected Dr Ranveer
Take the classic example of how we differentiate a cat from a dog. You might write rules like: how many legs does it have? It has four legs, so it's either a cat or a dog. If it has pointy ears, it's a cat. That gets really complex once you have things like multiple orientations, occlusions, and all those sorts of things. You want an algorithm where, rather than writing rules, you show it a bunch of examples and it independently learns the patterns. Examples of these algorithms are decision trees, random forests, reinforcement learning, and neural networks, which brings us to the interesting part: deep learning. Deep learning is just a special case of machine learning where the main algorithms we use are neural networks.
The next thing we're going to talk about really fast is the high-level Layers API. Two things: it's the recommended API for building neural networks, and it's very similar in spirit to the Keras library. For those who haven't used Keras, Keras is a really well-designed API for composing and training neural networks.
It gives a really nice way to think about the components in your network, and a really good way to express those components in the actual code.
To illustrate the API, I'd like us to walk through the process of building something called an autoencoder. An autoencoder is a neural network that has two parts. Typically, it's used for dimensionality reduction. A use case is: imagine that our input has 15 variables, and we want to compress it into just two variables that represent all 15 inputs.
There are two parts. The first part of the neural network is called an encoder. It takes the high-dimensional input (15 variables), and its goal is to output two values, which is called the bottleneck. The other requirement is that it should learn a meaningful compression, such that we can also reconstruct the original inputs from just that bottleneck representation. That's exactly what the decoder part of the neural network does: it takes the bottleneck generated by the encoder and learns to reconstruct the original inputs from it.
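To make the shapes concrete, here is a minimal sketch in plain JavaScript (no TensorFlow.js yet). The weights are random placeholders, so this "autoencoder" is untrained and skips the intermediate seven-unit layers; the only point is to show how 15 inputs are squeezed into a two-value bottleneck and then expanded back to 15 values.

```javascript
// Multiply a (rows x cols) weight matrix by an input vector.
function dense(weights, input) {
  return weights.map(row =>
    row.reduce((sum, w, i) => sum + w * input[i], 0)
  );
}

// Build a rows x cols matrix of small random placeholder weights.
function randomMatrix(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => Math.random() * 0.1)
  );
}

const encoderWeights = randomMatrix(2, 15);  // 15 inputs -> 2-value bottleneck
const decoderWeights = randomMatrix(15, 2);  // bottleneck -> 15 outputs

const input = Array.from({ length: 15 }, () => Math.random());
const bottleneck = dense(encoderWeights, input);          // length 2
const reconstruction = dense(decoderWeights, bottleneck); // length 15

console.log(bottleneck.length, reconstruction.length); // 2 15
```

In a real autoencoder the weights would be learned by training the network to minimize the difference between `input` and `reconstruction`.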
One interesting thing is that this whole model, the autoencoder, has been applied to the task of anomaly detection. The idea is that if we have some normal data, we can learn this mapping from inputs to a small-dimensional bottleneck, and then from that small dimension back to the output.
Thanks a lot for gracious participation respected Dr Mohish Lawrance
If we train this on a normal dataset, then every time we feed in new normal data, encode it, and decode it, we should get back about the same thing. How is this relevant to anomaly detection? If we train on normal data, we get an output that's similar to the input. Whenever we have an anomaly, data this model has never seen, and we push it through the network and try to reconstruct it, we'll get something that's really different.
That's called high reconstruction error. Depending on the size of this reconstruction error, we can then flag inputs as anomalies or not.
The idea is: if we have normal data, the reconstruction error is small, say 0.2; if we have abnormal data, the error is larger, say 0.8. We can set some kind of threshold, where when the reconstruction error is beyond that level, we flag the data as an anomaly. How do we express all of this in JavaScript?
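The thresholding step itself needs no library at all. Here is a small sketch in plain JavaScript; the error values and the 0.5 threshold are made-up examples for illustration.

```javascript
// Flag inputs as anomalies when their reconstruction error exceeds a
// threshold. The errors and the 0.5 threshold below are illustrative only.
function flagAnomalies(reconstructionErrors, threshold) {
  return reconstructionErrors.map(err => err > threshold);
}

// Suppose normal data reconstructs with low error and one anomaly with high error:
const errors = [0.18, 0.22, 0.81, 0.19];
console.log(flagAnomalies(errors, 0.5)); // [ false, false, true, false ]
```

In practice, the threshold would be chosen by looking at the distribution of reconstruction errors on held-out normal data.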
If you recall, we have an input of 15 units. We pass it to a dense layer of 15 units, then through another dense layer of seven units, down to a bottleneck layer of two units; that composes our encoder, and the mirrored layers form the decoder.
When it comes to deploying machine learning applications to end users, it can still be a really complex process. Doing this in the browser simplifies the whole workflow. There are no installs, no drivers, no dependency issues. It's as simple as going to a URL, opening a web page, and everything just works. In addition, if you deploy your model via NPM, you get all the benefits of model hosting, versioning, and distribution that come with distributing libraries through NPM.
The next has to do with latency. These days models can be optimized to run really fast, both on mobile and in the browser. In some use cases, it's actually faster to run your model in the browser than to send a round-trip request to a server and render the response back to the user.
In addition, in resource-constrained regions like Africa and East Asia, you really cannot rely on internet connectivity. It's a much better user experience to download the whole model to the device and offer a smooth experience that doesn't depend on a constant internet connection.
Then, finally, the browser is designed for interactive experiences, and machine learning can supercharge that. I'll show a few examples in a moment. With TensorFlow.js, you can build models on the fly, use rich user data available in the browser such as the camera and other sensors, retrain existing models, and enable really dynamic behavior. There are applications for these in ML education, retail, advertising, arts, entertainment, and gaming.
Before I proceed, I want to give a few concrete business use cases of how some industries are already using TensorFlow.js. An example from Airbnb relates to privacy-preserving sensitive content detection. As part of the user onboarding process, they ask the user to upload an image.
They have observed that, in some cases, users might upload images that contain sensitive content, like their driver's license. What they've done is put in a TensorFlow.js model that, right in the browser, will tell the user: "Your image contains sensitive content. We haven't seen it, but we can offer the service of telling you that it likely contains sensitive content, and you probably should use another photograph."
Thanks a lot for gracious participation respected Dr Akshay Kumar
Artificial Intelligence (AI) is one of the most promising technologies for growth today. According to recent data released by the consulting firm Gartner, the share of organizations that have implemented AI grew from 4% to 14% between 2018 and 2019.
AI is a key technology in Industry 4.0 because of all the advantages it brings to companies, and any company that wants to start a digital transformation process will have to adopt it in its processes.
The concept of Artificial Intelligence has been around for a long time. In fact, John McCarthy coined the term Artificial Intelligence in 1955, and Alan Turing was already discussing this reality in 1950 in an article entitled "Computing Machinery and Intelligence".
Since then this discipline of computer science has evolved a lot.
For Massachusetts Institute of Technology professor Patrick H. Winston, AI is "constraint-enabled algorithms exposed by representations that support looping models that link thought, perception and action".
Other authors, such as DataRobot CEO Jeremy Achin, define artificial intelligence as a computer system that enables machines to perform work that requires human intelligence.
For the head of TechTarget's technological encyclopedia, Margaret Rouse, it is a system that simulates different human processes such as learning, reasoning and self-correction.
As we can see, the three definitions of AI refer to machines or computer systems that think. They perform reasoning that emulates human intelligence in order to carry out tasks that only people could previously do.
However, other sources go further and define AI as a computer system used to solve complex problems that are beyond the capacity of the human brain.
The president of the Future of Life Institute, Max Tegmark, points in this direction, stating that "since everything we like about our civilization is a product of our intelligence, amplifying our human intelligence with artificial intelligence has the potential to help civilization flourish like never before".