I have come across many applications that use software agents, but I observe that they are rule-based. I plan to work on developing applications built on experience-based software agents.
When you say that you are working on an experience-based agent, what do you mean? Do you mean it learns from experience, or that it uses its experiences to learn? The distinction might be subtle, but it is profound.
Most agents work within prescribed environments to keep the rule base small. The more overhead an agent carries, the more space it takes and the fewer of them will fit within a given computer system.
For example, consider the Akka kernel. It has a minimalist agent architecture consisting mostly of message-passing receivers that can also send messages, but only if they are programmed to do so. Similar actors are available within the Scala language, yet both depend on the JVM and the support of the whole operating system.
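To make that concrete, here is a minimal sketch of such a receiver using the classic Akka actor API (assuming Akka 2.x is on the classpath; the class and message names are only illustrative):

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A minimalist receiver: it reacts only to the messages it was programmed
// for, and sending a reply is itself just another programmed reaction.
class EchoAgent extends Actor {
  def receive: Receive = {
    case msg: String => sender() ! s"received: $msg"
    // Anything not matched here is ignored and routed to dead letters.
  }
}

object MinimalAgent extends App {
  val system = ActorSystem("agents")
  val agent  = system.actorOf(Props(new EchoAgent), "echo")
  agent ! "hello" // the reply goes to dead letters since App is not an actor
}
```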
The next consideration is the message interface. Essentially, to send messages to your agents you need some interpretive element (really just a rule base) that converts messages into action requirements. Things like Start, Stop, and Pause come to mind. You drive this from a supervisor that can stop and pause the individual agents in response to control messages from the console. For synchronization purposes you might also want to include read, write, and dump facilities that give you some control over the message buffers.
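As a sketch of what that interpretive element might look like, here is a hypothetical supervisor/agent pair in classic Akka; the Start/Stop/Pause/Dump protocol is my own illustrative naming, not a standard API:

```scala
import akka.actor.{Actor, ActorRef, Props}

// Hypothetical control protocol: the interpretive element is just the
// pattern match below, a small rule base turning messages into actions.
sealed trait Control
case object Start extends Control
case object Stop  extends Control
case object Pause extends Control
case object Dump  extends Control // inspect the message buffer
final case class Work(payload: String)

class ControlledAgent extends Actor {
  def paused(buffer: Vector[Work]): Receive = {
    case Start   => buffer.foreach(self ! _); context.become(running)
    case w: Work => context.become(paused(buffer :+ w)) // hold until started
    case Dump    => println(s"buffered: $buffer")
    case Stop    => context.stop(self)
  }
  def running: Receive = {
    case Work(p) => println(s"processing: $p")
    case Pause   => context.become(paused(Vector.empty))
    case Stop    => context.stop(self)
  }
  def receive: Receive = paused(Vector.empty)
}

class Console extends Actor {
  // The supervisor relays console commands to its individual agents.
  val worker: ActorRef = context.actorOf(Props(new ControlledAgent), "worker")
  def receive: Receive = {
    case c: Control => worker ! c
    case w: Work    => worker ! w
  }
}
```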
Now we need to deal with libido: what do we want the agent to be able to learn to do? Do we want it to evolve its own program, to move randomly in some virtual environment, and eventually learn to do "good" things more often than "bad" ones? If so, how will we inform it that something is "good" or "bad"? If you think about it, an agent that programs itself but has no evaluative function isn't much good, and an agent that acts randomly but can be conditioned not to do bad things isn't much good all by itself. But if the two are combined, then programming itself to obtain "good" stimuli and avoid "bad" stimuli will allow you to select according to some "test" evaluation based on what you want the agent to be able to do. An example of this might be found in the early Darwin robots of Dr. Edelman's NSI.
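A toy version of that combination, self-modification plus an evaluative function, might look like the following plain-Scala sketch. The grid world, the scoring rule, and the hill-climbing loop are all assumptions chosen only to show the selection mechanism:

```scala
import scala.util.Random

// Hypothetical sketch: the agent's "program" is a sequence of moves in a
// grid world, and the evaluative function supplies the "good"/"bad" signal.
object SelfProgrammingSketch extends App {
  val rng   = new Random(42)
  val moves = Vector((0, 1), (0, -1), (1, 0), (-1, 0)) // up, down, right, left
  val goal  = (5, 5)

  type Program = Vector[Int] // indices into `moves`

  def run(p: Program): (Int, Int) =
    p.foldLeft((0, 0)) { case ((x, y), i) =>
      val (dx, dy) = moves(i)
      (x + dx, y + dy)
    }

  // "Good" stimulus: the closer the final position is to the goal,
  // the higher the score.
  def score(p: Program): Int = {
    val (x, y) = run(p)
    -(math.abs(goal._1 - x) + math.abs(goal._2 - y))
  }

  def mutate(p: Program): Program =
    p.updated(rng.nextInt(p.length), rng.nextInt(moves.length))

  // Selection: the agent reprograms itself, keeping only mutations
  // that the evaluator rewards.
  var program: Program = Vector.fill(12)(rng.nextInt(moves.length))
  for (_ <- 1 to 2000) {
    val candidate = mutate(program)
    if (score(candidate) >= score(program)) program = candidate
  }
  println(s"final position: ${run(program)}, score: ${score(program)}")
}
```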
To be able to extract meaning from its environment, however, is another thing entirely. An implicit memory system might help an agent "learn" from experience the "semantics" of its environment, whereas the G.A. mentioned above learns to navigate but does no semantic learning. Can you see the difference?
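As a rough illustration of that difference, here is a hypothetical implicit-memory sketch: the agent stores no explicit rules, only co-occurrence statistics, and those statistics come to act as the "meaning" of a cue (the cue and outcome strings are invented for the example):

```scala
import scala.collection.mutable

// Hypothetical implicit memory: no explicit rules are stored, only
// accumulated co-occurrence counts between cues and outcomes.
class ImplicitMemory {
  private val counts    = mutable.Map.empty[(String, String), Int].withDefaultValue(0)
  private val cueTotals = mutable.Map.empty[String, Int].withDefaultValue(0)

  def observe(cue: String, outcome: String): Unit = {
    counts((cue, outcome)) += 1
    cueTotals(cue) += 1
  }

  // The learned "semantics" of a cue: how strongly it predicts an outcome.
  def association(cue: String, outcome: String): Double =
    if (cueTotals(cue) == 0) 0.0
    else counts((cue, outcome)).toDouble / cueTotals(cue)
}

object ImplicitMemoryDemo extends App {
  val mem = new ImplicitMemory
  // The agent repeatedly experiences that a red light precedes a shock.
  for (_ <- 1 to 9) mem.observe("red-light", "shock")
  mem.observe("red-light", "nothing")
  println(mem.association("red-light", "shock")) // ~0.9: red light "means" danger
}
```

The navigating agent above would pass the red light without ever forming this kind of association; that is the gap an implicit memory is meant to fill.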