The "best software" is a very subjective request and will very much be determined by a number of factors. Factors such as what outcome are you seeking to achieve; what specification of resources do you have available; what kind of performance are you expecting; what kind of storage do you need; what kind of applications do you want to run; what kind of budget do you have; do you want a do it yourself solution, or is your preference for a fully supported commercial package; what level of expertise do you have in-house?
If you can provide me with the answers to these questions, I can give you a good idea of what you should be looking for.
I want to do it myself. I have theoretical expertise in cloud and practical expertise in networking. My resources are minimal: computers with 2 GB of RAM and corei2 processors. I want to run experiments to collect data such as availability, response time and bandwidth, and I also need to gain practical expertise in cloud computing.
That certainly helps a lot, thanks. The first thing I would say is that the computers you plan to use are a bit lightweight for running a decent cloud system. To simulate a full cloud environment, which you can do easily with Xeon-based hardware, you would ideally want a minimum of 16 GB of RAM, although you can manage with 8 GB. The problem is that a proper cloud environment can need up to 4 GB just to run itself, and you need extra RAM to share among the user nodes so that each has a reasonable amount to run an operating system and some programs to test. Depending on the processors you already have, you may not have enough threads to play with either. You could get around this to a certain extent by making a cluster out of your existing machines: with 4 machines you get four processors times however many cores and threads each one has, plus 8 GB of RAM in total. That is still not a great deal, because you will also need an extra layer for the cluster operating system, plus the cloud system on top.
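If it helps, here is a minimal sketch of how you might survey each machine before deciding whether it is worth including. It assumes a Linux host (it reads /proc) and simply reports the thread count, total RAM and whether the CPU advertises hardware virtualization support:

```python
# Quick hardware survey for a prospective cloud/cluster node.
# Assumes a Linux host (reads /proc); the figures are only a guide.
import os
import re

with open("/proc/cpuinfo") as f:
    cpuinfo = f.read()
with open("/proc/meminfo") as f:
    meminfo = f.read()

threads = os.cpu_count()                              # logical CPUs
# vmx = Intel VT-x, svm = AMD-V: needed for hardware-accelerated VMs
virt_ok = bool(re.search(r"\b(vmx|svm)\b", cpuinfo))
mem_gb = int(re.search(r"MemTotal:\s+(\d+) kB", meminfo).group(1)) / 1024 ** 2

print(f"Logical CPUs (threads): {threads}")
print(f"Total RAM:              {mem_gb:.1f} GB")
print(f"VT-x / AMD-V present:   {'yes' if virt_ok else 'no'}")
```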
A cloud system needs to run a management console, an object storage gateway, a cluster controller, a storage controller and a node controller at the very least, so you would not have much left to play with. A better idea would be to source a Xeon quad-core machine; an early ex-server with a quad-core processor and 8 GB or more of RAM would be ideal. You can usually pick these up very cheaply when large companies upgrade their systems. It does not need to be up to date, something five or more years old will do nicely, preferably with hardware virtualization support (Intel VT-x or AMD-V). Some of these machines have two processors, which could give you a minimum of 8 cores and 16 threads. That would let you run a real-world cloud system like Eucalyptus. Before HP took the company over, Eucalyptus was developed as a full cloud platform which you could either cluster to spread the workload, or run as a "cloud-in-a-box" on a single machine if it had enough resources. It was, and still is, fully compatible with Amazon's AWS APIs. There is a free community edition available, although you have to figure a lot of things out for yourself; your networking experience will come in handy there.
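Because Eucalyptus speaks the same API as AWS EC2, the standard AWS tooling works against it. As a rough sketch only, the endpoint URL, region, credentials and image ID below are placeholders for whatever your own installation uses; listing and launching instances from Python with boto3 would look something like this:

```python
# Minimal sketch of driving an AWS-compatible private cloud (for example a
# Eucalyptus installation) from Python with boto3. The endpoint, region,
# credentials and image ID are placeholders for your own setup.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.local:8773/services/compute",  # placeholder
    region_name="eucalyptus",                                          # placeholder
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    verify=False,  # many lab installs use self-signed certificates
)

# List any instances that already exist.
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["State"]["Name"])

# Launch one small instance from a registered machine image (placeholder ID).
ec2.run_instances(ImageId="emi-12345678", InstanceType="m1.small",
                  MinCount=1, MaxCount=1)
```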
You could also use OpenStack, OpenNebula or Apache CloudStack, to name just three of the many open-source options that provide a full cloud environment. The key to a decent-performing cloud system is the quality of the hardware, so I would certainly recommend getting hold of a Xeon-based server if you want to go down that route. It will be a great deal of work, but you will learn a great deal in the process. If you think getting the necessary hardware will be a problem, you might take a slightly different approach: provided your current machines have at least 2 cores each, you could set one up as the main cloud server and add the others as a cluster, giving you your 8 GB of RAM and 8 processor cores. As long as you are happy to run very small cloud instances, you could still run these as full cloud systems.
I can tell you that four years ago, the University of Glasgow developed a cloud simulator using early Raspberry Pi boards, some 56 of them as I recall. They had to write various pieces of simulation software to replicate what happens in a cloud environment, but it shows what can be done. The link to the paper is here:
http://eprints.gla.ac.uk/83064/1/83064.pdf
The other option is to break the cloud down into its individual functions and use software such as ownCloud to replicate cloud storage and file sharing. Tools like that usually cover only a single cloud function, and I can tell you it is not nearly as interesting as having a full cloud system to play with; you will learn far more with a full system, and that would be my advice to you.
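That said, even a single-function setup can still feed your experiments. ownCloud exposes a standard WebDAV interface, so you can script timed uploads against it. Here is a rough sketch only; the server URL, username and password are placeholders for your own installation:

```python
# Timed upload of a 1 MiB test object to ownCloud over WebDAV.
# The server URL, username and password are placeholders for your own setup.
import base64
import time
import urllib.request

BASE = "https://owncloud.example.local/remote.php/webdav/"   # placeholder
USER, PASSWORD = "demo", "demo-password"                     # placeholders

payload = b"x" * (1024 * 1024)  # 1 MiB of dummy data
req = urllib.request.Request(BASE + "probe.bin", data=payload, method="PUT")
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", "Basic " + token)

start = time.monotonic()
with urllib.request.urlopen(req, timeout=30) as resp:
    status = resp.status
elapsed = time.monotonic() - start
print(f"PUT returned {status}; upload rate ~{len(payload) / elapsed / 1024:.0f} KiB/s")
```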
Just remember that if you go down the cluster route, you will learn a great deal from building the cluster itself, but once you have sourced a suitable server, you will want to rebuild the whole system with the server at its core. You can then either add your current machines individually to act as specific nodes; add your cluster as a whole to the server; or add the current machines to the server and let it cluster them for you.
The first option would let you designate specific machines for specific admin roles, such as cluster controller, storage controller or node controller, leaving the server to service all your cloud instances. The second would leave everything to the server, with the cluster of existing machines available to it as a resource. The third would give you a mix of the two, with possibly some small efficiency gains, as the server would handle all the management software, leaving a little more compute free on the existing machines.
Whichever way you choose to go, you will learn a significant amount along the way, and you will end up with a very usable cloud system on which to run and test a variety of different tasks. It should be a lot of fun, too.
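Once the system is up, the availability, response time and bandwidth figures you mentioned can be collected with a very small probe script run from one of the spare machines. Here is a minimal sketch; the target URL is a placeholder for whatever endpoint your cloud exposes, and for serious bandwidth numbers you would want a dedicated tool such as iperf3:

```python
# Simple availability / response-time probe with a rough throughput estimate.
# The target URL is a placeholder for whatever endpoint your cloud exposes
# (a storage gateway, a web app running inside an instance, and so on).
import time
import urllib.request

TARGET = "http://cloud.example.local/healthcheck"   # placeholder
SAMPLES = 10

successes, latencies, bytes_received, busy_time = 0, [], 0, 0.0

for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            body = resp.read()
        elapsed = time.monotonic() - start
        successes += 1
        latencies.append(elapsed)
        bytes_received += len(body)
        busy_time += elapsed
    except OSError:
        pass  # count any failure or timeout against availability
    time.sleep(1)

print(f"Availability: {successes / SAMPLES:.0%}")
if latencies:
    print(f"Mean response time: {sum(latencies) / len(latencies) * 1000:.1f} ms")
    print(f"Approx. throughput: {bytes_received / busy_time / 1024:.1f} KiB/s")
```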
Best wishes for your plan, and do let me know how you get on.