The various blocks that make up a microprocessor have to transmit and receive data. Without timing information (a clock signal), the transmission and reception of data could not be performed in a structured way. For example, on a bus where data are both transmitted and received, data first have to be received, then data have to be transmitted, and so on. All of these steps have to happen at the proper time, and that is a fairly simple explanation for why timing is used.
Clocking is needed at the physical level to synchronize data transfers and, as we know, it also defines the processing speed. This applies to digital computing systems. Analog computers, as far as I know, don't use clock signals; all operations proceed almost in real time.
The main reason for the clock signal in most digital designs (including microprocessors) is that it is needed to synchronize most of the interacting blocks, following the "synchronous digital design" paradigm, which states that all temporal activity must be referenced to the edges of a special digital signal called the clock. In a few words we could say that:
Fully synchronous designs are quite predictable and are well supported by most EDA tools.
On the contrary,
Asynchronous design must rely on extensive and detailed analog information about the circuit's behavior, which is usually very difficult to obtain and analyze for VLSI circuits.
Long experience in digital design suggests that synchronous design should be the rule, while asynchronous design should be the exception and must be rigorously justified.
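To make the synchronous paradigm concrete, here is a minimal VHDL sketch (the entity and signal names are only illustrative): all state changes are written inside a clocked process and take effect only on the rising edge of the clock, which is exactly the discipline most EDA tools assume.

    library ieee;
    use ieee.std_logic_1164.all;

    -- a single 8-bit register: the archetype of synchronous design,
    -- where every state change is referenced to the rising clock edge
    entity sync_reg is
      port (
        clk : in  std_logic;
        d   : in  std_logic_vector(7 downto 0);
        q   : out std_logic_vector(7 downto 0)
      );
    end entity sync_reg;

    architecture rtl of sync_reg is
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          q <= d;  -- the new value becomes visible only after the clock edge
        end if;
      end process;
    end architecture rtl;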
The clock signal can be considered the guidance the microprocessor follows to perform its tasks, since any operation must begin and end according to the clock signal, which marks the starting and ending points of the task. Consequently, the clock signal prevents the microprocessor's tasks from merging into each other.
Computing devices can be made using asynchronous circuits. What that means is that the computing or decision-making logic generates a signal indicating that the operation is complete and the result is valid. A computer can then be built up from such asynchronous subassemblies. One problem with this is that most logic-verification tools are designed for clocked circuits. Another problem is that there isn't a tremendous amount to be gained, since most computers are designed with subunits arranged in a registered pipeline fashion, where each segment of the pipe is designed to take the same amount of time as the other segments so that a master clock can control the flow of information. If such a pipeline were asynchronous, it would still be held up by its slower segments.
This situation is likely to change, since most microprocessors seem to have hit the ~3 GHz speed limit and perhaps a bit more can be gained using asynchronous logic. Formal verification has tackled one of the banes of asynchronous logic, deadlock, where under some conditions the completion signal can never be generated (http://arxiv.org/abs/1304.7859).
OK, take it like this on a time scale: if the clock is 1 Hz and the data is 10101010, the same data with a 2 Hz clock will appear as 1100110011001100. So how will you interpret it without knowing the clock or any other timing signal?
If I understand you, the general solution is to encode the data in such a way that a pattern such as 10101010 cannot occur in the data itself. That pattern can then serve as a header that synchronizes the self-clocking receiver for the remainder of the data burst.
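One well-known self-clocking scheme is Manchester encoding, where every bit cell contains a transition so the receiver can recover the timing from the data stream itself. Below is a rough VHDL sketch of the transmit side only (the names are mine and the header/framing logic is left out); it assumes the IEEE 802.3 convention of '0' = high-then-low and '1' = low-then-high.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Manchester encoder sketch: clk2x runs at twice the bit rate and
    -- bit_in is assumed to be held stable for two clk2x cycles
    entity manchester_enc is
      port (
        clk2x  : in  std_logic;
        bit_in : in  std_logic;
        tx     : out std_logic
      );
    end entity manchester_enc;

    architecture rtl of manchester_enc is
      signal second_half : std_logic := '0';
    begin
      process (clk2x)
      begin
        if rising_edge(clk2x) then
          -- '0' -> high then low, '1' -> low then high, so every bit cell
          -- has a mid-cell transition the receiver can lock to
          tx          <= bit_in xnor second_half;
          second_half <= not second_half;
        end if;
      end process;
    end architecture rtl;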
Asynchronous data manipulation and calculation can also be done. A probable answer to this is as follows; I am a newbie to this, but this is how I understand it. The example is of a division algorithm that had to be implemented in VHDL:
I wrote a VHDL program for a division algorithm, and in that program I found that my data was getting corrupted because my inputs were changing before I could process the data. In other words, if the propagation delay is greater than the processing time, you end up with garbage data, so to get correct results I had to synchronize the data with a clock. So I believe the same is required in microprocessors too.
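To make that concrete, the fix amounts to capturing the operands into registers on a clock edge so they cannot change underneath the computation. A minimal sketch of the idea, assuming a 16-bit divider (the entity, port, and signal names here are only illustrative, and the single-line division stands in for the actual iterative algorithm):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity div_sync is
      port (
        clk      : in  std_logic;
        start    : in  std_logic;
        dividend : in  unsigned(15 downto 0);
        divisor  : in  unsigned(15 downto 0);
        quotient : out unsigned(15 downto 0)
      );
    end entity div_sync;

    architecture rtl of div_sync is
      signal dividend_r : unsigned(15 downto 0) := (others => '0');
      signal divisor_r  : unsigned(15 downto 0) := (others => '1');
    begin
      -- operands are latched on the clock edge when start is asserted;
      -- from then on the computation sees stable values, no matter how
      -- the external inputs change
      process (clk)
      begin
        if rising_edge(clk) then
          if start = '1' then
            dividend_r <= dividend;
            divisor_r  <= divisor;
          end if;
        end if;
      end process;

      -- placeholder for the real (multi-cycle) division algorithm:
      -- it operates only on the registered copies of the operands
      quotient <= dividend_r / divisor_r;
    end architecture rtl;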
Hitesh, in your case you would have to create a completion signal that would 'clock' your data into a register to be saved, or into wherever it was going to be used. In some cases these completion signals are not easy to create and require a significant amount of extra logic. These signals chain backwards to control traffic, and in any traffic situation the weakest link (the longest-duration unit) will hold up the traffic. This means that you have to be careful how you partition your problem into units, in a manner not too different from a clocked register-to-register architecture.
It is also possible to overlap inputs before a unit's output is generated if permission signals are also generated by each unit. These would also have to be chained backwards, since you have to know that all upstream units are ready to take inputs. You also have to ensure that completion signals are not thwarted by new inputs allowed in by permission signals. This logic can be very tricky to implement and, most importantly, to verify.
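To give a flavour of that completion/permission chaining, here is a sketch of a single buffer stage using the synchronous valid/ready handshake, which is the clocked cousin of what I described (all names are illustrative): out_valid plays the role of the completion signal, in_ready the permission signal, and the ready path chains backwards from consumer to producer.

    library ieee;
    use ieee.std_logic_1164.all;

    entity hs_stage is
      port (
        clk       : in  std_logic;
        -- upstream side: in_valid is the producer's completion signal,
        -- in_ready is this stage's permission signal back to the producer
        in_valid  : in  std_logic;
        in_ready  : out std_logic;
        in_data   : in  std_logic_vector(7 downto 0);
        -- downstream side: the same pair, one stage later
        out_valid : out std_logic;
        out_ready : in  std_logic;
        out_data  : out std_logic_vector(7 downto 0)
      );
    end entity hs_stage;

    architecture rtl of hs_stage is
      signal full : std_logic := '0';
      signal data : std_logic_vector(7 downto 0) := (others => '0');
    begin
      -- permission chains backwards: accept a new word when the stage is
      -- empty, or when the downstream stage is draining the current word
      in_ready  <= (not full) or out_ready;
      out_valid <= full;
      out_data  <= data;

      process (clk)
      begin
        if rising_edge(clk) then
          if in_valid = '1' and (full = '0' or out_ready = '1') then
            data <= in_data;   -- take the new word
            full <= '1';
          elsif out_ready = '1' then
            full <= '0';       -- word consumed downstream, stage empties
          end if;
        end if;
      end process;
    end architecture rtl;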
Verification (formal methods and regression suites) has been difficult with asynchronous machines, since most available software is for the verification of clocked logic. This is an area that needs more attention but probably won't get it unless the industry is forced in this direction to squeeze the last ounce of computational speed out of microprocessors.
One advantage of asynchronous logic is that it ages gracefully. Current microprocessors slow down due to Negative Bias Temperature Instability (NBTI), and this generally isn't noticed until the slowdown eats through the timing margins of the system clock and the processor starts making errors. Since asynchronous logic knows when it is done, the result will always be completed rather than cut short by a system clock. Most of us haven't noticed this problem because we don't own a computer long enough, and any noticeable slowdown in performance was probably due to software issues. One study (http://userwww.sfsu.edu/necrc/files/thesis/thesis_report_Milana.pdf) of 32 nm CMOS put the NBTI slowdown at 1.7% for two years of continuous operation. If correct, this is very significant and has important ramifications for servers, cloud computing, and home computing if you expect to own your computer for an extended period.