Inferring the Network Latency Requirements of Cloud Tenants
Venue
15th Workshop on Hot Topics in Operating Systems (HotOS XV), USENIX Association (2015)
Publication Year
2015
Authors
Jeffrey C. Mogul, Ramana Rao Kompella
Abstract
Cloud IaaS and PaaS tenants rely on cloud providers to provide network
infrastructures that make the appropriate tradeoff between cost and performance.
This can include mechanisms to help customers understand the performance
requirements of their applications. Previous research (e.g., Proteus and Cicada)
has shown how to do this for network-bandwidth demands, but cloud tenants may also
need to meet latency objectives, which in turn may depend on reliable limits on
network latency, and its variance, within the cloud provider's infrastructure. On
the other hand, if network latency is sufficient for an application, further
decreases in latency might add cost without any benefit. Therefore, both tenant and
provider have an interest in knowing what network latency is good enough for a
given application. This paper explores several options for a cloud provider to
infer a tenant's network-latency demands, with varying tradeoffs between
requirements for tenant participation, accuracy of inference, and instrumentation
overhead. In particular, we explore the feasibility of a hypervisor-only mechanism,
which would work without any modifications to tenant code, even in IaaS clouds.
