I am just wondering how to express a sensor using MATLAB code. Please help me out. The sensor can be a sonar, a vision sensor, or a laser sensor; any of these will work.
Your question is a little bit confusing: what exactly do you mean by 'how to express a sensor using MATLAB code'?
In a standard scenario a sensor will (more or less) constantly provide input data gathered from the surroundings. Using some kind of interface / API you can retrieve this information, process it, and afterwards send commands to one or more actuators in your system to create a desired action (e.g. avoid an obstacle).
What exactly is your question? How to retrieve the sensor information in Matlab? How to process the information and create a response? How to develop obstacle detection based on the sensor data?
If you want to detect the obstacle with an ultrasonic sensor and you are using an Arduino microcontroller, then, from my practical observation, it is not possible for me to interface the ultrasonic sensor with MATLAB via the Arduino. This is the same situation as in my 'Smart Traffic Signal Using Ultrasonic Sensor' research paper; I suggest referring to my research paper and my thesis to clarify the concepts of MATLAB and Arduino interfacing.
Thesis Smart Traffic Signal
Conference Paper Smart traffic signal using ultrasonic sensor
Dear all, I am sorry, maybe my question was not descriptive enough.
First of all, in my project I have no hardware;
I will be making a virtual robot with an obstacle detection algorithm, so everything will be in code. I want to know how to express a sensor as a mathematical equation in MATLAB or any programming language. I hope you understand my question.
So you do not have to consider a concrete physical system to work with, but rather want to develop a basic (maybe later advanced, maybe domain-specific) model of 'sensor - actuator interaction'?
If this is the case, the publications provided by Rahul Naayan Dhole seem to be a helpful starting point, even if they are pretty specialized for one scenario ...
For a more generalized consideration I would suggest these two:
1) A very 'generalized point of view' :
(Google Books is not the best source, I know - but maybe you are able to get it some other way?)
2) And a more specific example (out of many) on how to do it:
http://adb.sagepub.com/content/2/3/277.short
If you are trying to develop a basic formalized model, I would suggest keeping it as simple as possible. There are some 'monitoring entities' (sensors) providing input data (more or less constantly) of any kind (speed, distance, temperature, gas density, ...) about the environmental conditions. Additionally, there are some 'processing entities' that make real-time decisions based on the provided sensor data and some learned rules (the model). And finally, there are some 'reacting entities' that act based on the decisions made.
Basically: 'Input - Processing - Output'
If you want to model this in a basic way, you might start very simple:
A 'generalized sensor' can be comprehended as some kind of function sf(t) -> v, which provides a certain measurement value v (with a defined maximum / minimum / range, ...) for a given point in time t.
Generally, this output is afterwards used as input for a very simple generic decision function df(v, th) -> true / false: given a certain sensor measurement value v and a defined threshold th, the function provides a basic decision (true / false).
E.g. 'If the actual distance to any object recognized by the sensors does not exceed a defined threshold, the output is true, otherwise false'.
The output of the decision function is then used to generate an appropriate response and send commands to the 'reactors' of the system: 'If the distance to objects is not critical: keep going at full speed with global navigation, else: slow down and switch to local navigation'.
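To make this concrete, here is a minimal MATLAB sketch of that idea, assuming a purely virtual distance sensor; the sensor model, the threshold of 0.5 m and the speed values are illustrative assumptions only:

% virtual 'sensor' function sf(t) -> v: distance in metres at time t, with some noise
sf = @(t) 2.0 + 0.5*sin(t) + 0.05*randn();
% decision function df(v, th) -> true / false: true if the distance is critical
df = @(v, th) v <= th;

th = 0.5;                        % assumed safety threshold in metres

for t = 0:0.1:10                 % simulated time steps
    v = sf(t);                   % input: read the virtual sensor
    if df(v, th)                 % processing: compare against the threshold
        speed = 0.2;             % output: slow down, switch to local navigation
        mode  = 'local';
    else
        speed = 1.0;             % output: keep full speed, global navigation
        mode  = 'global';
    end
    fprintf('t = %4.1f  distance = %5.2f m  speed = %.1f  mode = %s\n', t, v, speed, mode);
end

Replacing sf with a geometric model of your virtual environment (e.g. the distance to the nearest obstacle along the robot's heading) gives you a more realistic simulated sensor.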
I know this is a very basic example - still I hope that it helps you a bit ...
You can also create an image containing black blocks of different shapes to illustrate obstacles. Write your code according to your prepared/proposed algorithm in order to avoid the obstacles present in the image. At the initial stage, you can use a circle as the robot and navigate it from a source to a destination on the image.
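A small MATLAB sketch of such a test image; the map size, the obstacle positions and the robot radius are arbitrary assumptions for illustration:

map = ones(200, 200);              % white image = free space
map(40:80, 50:90)    = 0;          % black blocks = obstacles
map(120:160, 30:70)  = 0;
map(60:110, 140:180) = 0;

imshow(map); hold on;

start = [20, 20];                  % assumed start position [x, y]
goal  = [180, 190];                % assumed destination [x, y]
r     = 6;                         % assumed robot radius in pixels

theta = linspace(0, 2*pi, 50);     % draw the robot as a circle at the start
plot(start(1) + r*cos(theta), start(2) + r*sin(theta), 'b', 'LineWidth', 2);
plot(goal(1), goal(2), 'rx', 'MarkerSize', 10, 'LineWidth', 2);

% a virtual 'sensor' can then simply inspect pixel values, e.g. check whether
% the pixel just in front of the robot is black (an obstacle):
isObstacle = map(round(start(2)), round(start(1) + r + 1)) == 0;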
What exactly do you mean by 'detect obstacles without a sensor'? That sounds a bit odd. Without any ability to 'sense' obstacles, I have no idea how to detect them ... (even crashing into something - not being able to move on - means some kind of sensing?)
Or do you aim for a navigation that is based on a specified starting point, odometry, and a given map including the obstacles?
I assume that "odometry navigation" might be a good keyword to start your search.
In very short words: given a specified starting point, keep track of your own movement actions (turning angles, speed, time and sequence of single steps) and write them into a journal. Based on this, you can calculate (estimate) your current position in relation to the starting point.
Also, the starting point is your reference point to align your 'local navigation' (the odometry journal) to the 'global navigation' (the given map) and estimate your location on the map.
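A minimal MATLAB sketch of such an odometry journal (dead reckoning); the movement steps below are made-up example data:

% journal of movement steps: [turn angle in rad, speed in m/s, duration in s]
journal = [ 0,     1.0, 2.0;
            pi/2,  0.5, 1.0;
           -pi/4,  1.0, 3.0 ];

pose = [0, 0, 0];                  % [x, y, heading] relative to the starting point

for k = 1:size(journal, 1)
    pose(3) = pose(3) + journal(k, 1);        % apply the turn
    d = journal(k, 2) * journal(k, 3);        % distance travelled = speed * time
    pose(1) = pose(1) + d * cos(pose(3));     % update x
    pose(2) = pose(2) + d * sin(pose(3));     % update y
end

fprintf('estimated pose: x = %.2f m, y = %.2f m, heading = %.2f rad\n', pose);

Adding a small random error to each turn and each distance makes the drift of this estimate visible.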
Good luck ... and be careful: the more you move, the more imprecise your local navigation will become, and you have no second chance to align global and local navigation again after the starting point ...