Deploying enterprise applications in a tiered architecture is increasingly standard in both virtualized and non-virtualized environments. A major challenge in managing such applications is quickly detecting performance bottlenecks, anomalies, or surprises, and identifying which tier is causing the perceived constriction.

I am interested in what techniques have been used, or are currently being used, for such tasks, especially in virtualized environments where applications are deployed in black boxes (VMs) and the infrastructure provider can detect these changes and identify the affected tier only by externally observing the applications' performance.

In this case, how can one use external system-level performance "vital signs" of virtual machines to quickly detect anomalies and identify the affected tier?
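To make the question concrete, here is a minimal sketch of one possible approach (not an established method): z-scoring each externally observable VM metric against its recent baseline, then attributing the anomaly to the tier with the strongest deviation. All function names, metric names, and the threshold are illustrative assumptions.

    # Minimal sketch: per-metric z-score anomaly detection over
    # hypervisor-visible VM "vital signs" (CPU, disk I/O, network).
    # Metric names and the threshold of 3.0 are assumptions.
    import statistics

    def detect_anomalies(history, latest, threshold=3.0):
        """Flag metrics whose newest sample deviates from the baseline.

        history: dict mapping metric name -> list of recent samples
        latest:  dict mapping metric name -> newest sample
        Returns {metric: z_score} for metrics exceeding the threshold.
        """
        anomalies = {}
        for metric, samples in history.items():
            mean = statistics.fmean(samples)
            stdev = statistics.stdev(samples)
            if stdev == 0:
                continue  # flat baseline; skip to avoid division by zero
            z = abs(latest[metric] - mean) / stdev
            if z > threshold:
                anomalies[metric] = z
        return anomalies

    def localize_tier(vm_metrics, threshold=3.0):
        """Attribute the anomaly to the tier (VM) deviating the most.

        vm_metrics: dict mapping tier name -> (history, latest) pair
        """
        worst_tier, worst_score = None, 0.0
        for tier, (history, latest) in vm_metrics.items():
            flagged = detect_anomalies(history, latest, threshold)
            score = max(flagged.values(), default=0.0)
            if score > worst_score:
                worst_tier, worst_score = tier, score
        return worst_tier, worst_score

    # Hypothetical two-tier deployment: a disk I/O spike on the DB VM
    # is flagged, so the database tier is reported as the culprit.
    vm_metrics = {
        "web": ({"cpu": [20, 22, 21, 23], "disk_io": [5, 6, 5, 7]},
                {"cpu": 24, "disk_io": 6}),
        "db":  ({"cpu": [40, 41, 39, 42], "disk_io": [80, 82, 79, 81]},
                {"cpu": 43, "disk_io": 300}),
    }
    print(localize_tier(vm_metrics))  # -> ('db', <large z-score>)

Clearly a real system would need something more robust (correlated metrics across tiers, workload-aware baselines, etc.), which is exactly what I am hoping the literature addresses.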
