III.   SERVER CONSOLIDATION

 

This section presents a taxonomy of server consolidation frameworks, a review of
representative frameworks, and a comparison of existing frameworks based on
parameters selected from the literature.


 

A. Taxonomy of server
consolidation frameworks

 

This section presents a taxonomy for classifying server consolidation
frameworks. Frameworks are divided according to five common characteristics:
resource assignment policy, architecture, co-location criteria, migration
triggering point, and migration model [2]. The resource assignment policy
attribute is either static or dynamic: a static method pre-assigns maximum
resources to a VM upon its creation, whereas a dynamic method adjusts resources
at runtime. The architecture attribute describes the design of the server
consolidation framework; centralized frameworks are prone to a single point of
failure and are therefore less reliable. The co-location criteria attribute
defines the criterion used to co-host multiple VMs within a server; co-location
criteria can be expressed in terms of shared memory, communication bandwidth
between VMs, power efficiency, and sufficient resource availability. The
migration triggering point determines the appropriate time to migrate a VM.
Finally, the migration model describes the migration pattern chosen to move a
VM between servers: during server consolidation, VMs are migrated using either
the pre-copy or the post-copy method.
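The five taxonomy attributes above can be encoded directly as a data structure. The sketch below is purely illustrative; the class and attribute names are our own, not drawn from any surveyed framework:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the five taxonomy attributes described above.
class ResourceAssignment(Enum):
    STATIC = "static"    # maximum resources pre-assigned at VM creation
    DYNAMIC = "dynamic"  # resources adjusted at runtime

class Architecture(Enum):
    CENTRALIZED = "centralized"  # prone to a single point of failure
    DISTRIBUTED = "distributed"

class MigrationModel(Enum):
    PRE_COPY = "pre-copy"
    POST_COPY = "post-copy"

@dataclass
class ConsolidationFramework:
    name: str
    assignment: ResourceAssignment
    architecture: Architecture
    colocation_criteria: str   # e.g. "shared memory", "communication bandwidth"
    migration_trigger: str     # e.g. "host overload detected"
    migration_model: MigrationModel

# Classifying a hypothetical framework under this taxonomy:
fw = ConsolidationFramework(
    name="ExampleFW",
    assignment=ResourceAssignment.DYNAMIC,
    architecture=Architecture.DISTRIBUTED,
    colocation_criteria="communication bandwidth",
    migration_trigger="host overload",
    migration_model=MigrationModel.PRE_COPY,
)
```

Such an encoding makes the qualitative comparisons later in this section mechanical: frameworks sharing an attribute value fall into the same taxonomy branch.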

 

B. A review of server
consolidation frameworks

 

VM placement depends on communication cost in order to improve the performance
of I/O and non-I/O applications. The communication cost is a function of the
communication rate and the end-to-end network delay. Representing the
communication cost between VMs helps identify communication-intensive VMs in
order to form a VM cluster. A cost tree representing the communication cost
between VMs then serves to place VMs according to the communication distance
traversed between them.
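As a minimal sketch of this idea, the cost of a VM pair can be modeled as rate times delay, and high-cost pairs grouped for co-placement. The cost function, VM names, and threshold below are illustrative assumptions, not values from the surveyed work:

```python
# Hypothetical cost model: communication cost between two VMs as a
# function of their communication rate and end-to-end network delay.
def comm_cost(rate_mbps: float, delay_ms: float) -> float:
    return rate_mbps * delay_ms  # a chatty pair over a slow path costs most

# Pairwise costs for a hypothetical set of VMs; communication-intensive
# pairs (high cost) are candidates for clustering and co-placement.
pairs = {
    ("vm1", "vm2"): comm_cost(100, 5.0),  # chatty, slow path -> cluster
    ("vm1", "vm3"): comm_cost(2, 5.0),    # quiet pair
    ("vm2", "vm3"): comm_cost(80, 1.0),   # chatty but already close
}

THRESHOLD = 100.0
clusters = [pair for pair, cost in pairs.items() if cost > THRESHOLD]
```

Placing the VMs of each high-cost pair on the same server (or nearby servers in the cost tree) shortens the communication distance that dominates their cost.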

 

Unwanted VM migrations are avoided in order to decrease SLA violations. The
framework, however, does not consider the effect of CPU and memory workloads
during VM placement, even though such workloads can degrade system performance.

 

C. Comparison of server
consolidation frameworks

 

Many VM migration approaches have optimized application downtime and total
migration duration by employing optimization and avoiding aggressive migration
termination. However, an optimization method introduces additional overhead on
shared resources such as CPU, memory, or cache while optimizing VM migration
performance parameters such as downtime, total migration time, and application
QoS. A qualitative comparison of VM migration schemes based on the selected
parameters highlights commonalities and variances among existing bandwidth
optimization schemes. Migration optimization exploits deduplication,
compression, fingerprinting, and dynamic self-ballooning to improve application
and network performance, so VM migration approaches can make optimized use of
network bandwidth.

 

IV. VIRTUAL MACHINE MIGRATION OPTIMIZATION

 

This section presents and compares VM migration optimization schemes that
consider bandwidth, DVFS-enabled power, and storage optimization to reduce the
side effects of the VM migration process. VM migration across a LAN exploits a
network-attached storage (NAS) architecture to share storage between the
communicating servers. However, migrating a VM across WAN boundaries requires
migrating large-sized storage in addition to VM memory over intermittent links.

 

A. Bandwidth optimization

 

This section discusses the effective use of limited network capacity to enhance
application performance during the VM migration process. It also presents a
thematic taxonomy, an evaluation of existing schemes, and comparisons between
bandwidth optimization schemes.

 

1)   Taxonomy of bandwidth optimization schemes: Different bandwidth-optimized
live VM migration schemes result in varying application downtime and total
migration time, depending on the nature of the workload hosted within the
migrant VM, the type of network link, the number of concurrent migrant VMs, and
the type of hypervisor selected to manage server resources. One proposed scheme
applies binary XOR-based RLE (XBRLE) delta compression to improve VM migration
performance. Prior to triggering migration, the guest kernel conveys soft page
addresses to the VMM. For further improvement, the delta page is compressed
using a lightweight compression algorithm.
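The core of XOR-based delta compression can be illustrated in a few lines: XOR an old and a new copy of a memory page, then run-length encode the result, which is mostly zero bytes when the page changed little. This is a simplified sketch of the general technique, not the actual XBRLE implementation; the page size and encoding format are assumptions:

```python
def xor_delta(old: bytes, new: bytes) -> bytes:
    # Byte-wise XOR of two equal-sized pages; unchanged bytes become 0x00.
    return bytes(a ^ b for a, b in zip(old, new))

def rle_encode(data: bytes) -> list:
    # Run-length encoding as (byte_value, run_length) pairs.
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

# A mostly-unchanged 4 KiB page XORs to long runs of zero bytes, which
# RLE compresses well; only the encoded delta crosses the network, and
# the receiver reconstructs the page by XORing the delta onto its cache.
old_page = bytes(4096)                    # cached copy at the receiver
new_page = bytearray(old_page)
new_page[100:104] = b"\x01\x02\x03\x04"   # small dirty region
delta = xor_delta(old_page, bytes(new_page))
compressed = rle_encode(delta)            # 6 runs instead of 4096 bytes
```

The win grows with page stability: a fully rewritten page gains nothing, which is why such schemes pay off mainly for pages that are re-dirtied with small changes across pre-copy rounds.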

2)   Review of bandwidth optimization schemes: An optimized post-copy VM
migration scheme was proposed that exploits on-demand paging, active push,
pre-paging, and dynamic self-ballooning to pre-fetch memory pages at the
receiver host. However, growing bubbles around the pivot memory page to
transfer neighboring memory pages does not always improve VM migration
performance, especially when write-intensive applications are hosted within the
migrated VMs. Active push transfers memory pages to the target server and
ensures that every page is sent exactly once from the source server. The scheme
proceeds by transferring CPU registers and device states to the receiver host
prior to migrating the VM memory content.
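The exactly-once property of active push can be sketched with a shared set of pending pages: a demand fault and the background pusher both draw from it, so no page is transferred twice. This is an illustrative model under assumed names, not the proposed scheme's implementation:

```python
class PostCopyMigration:
    """Sketch of post-copy page transfer: each page leaves the source
    exactly once, either via active push or via an on-demand fault."""

    def __init__(self, pages: dict):
        self.source = pages
        self.pending = set(pages)   # page numbers not yet transferred
        self.target = {}            # pages materialized at the receiver

    def fault(self, page_no: int) -> bytes:
        # The resumed VM faulted on a missing page: fetch it on demand
        # and remove it from the pending set so it is never pushed again.
        if page_no in self.pending:
            self.pending.discard(page_no)
            self.target[page_no] = self.source[page_no]
        return self.target[page_no]

    def push_next(self) -> None:
        # Background active push of one remaining page.
        if self.pending:
            page_no = self.pending.pop()
            self.target[page_no] = self.source[page_no]

mig = PostCopyMigration({0: b"a", 1: b"b", 2: b"c"})
mig.fault(1)              # page 1 demand-fetched once
while mig.pending:
    mig.push_next()       # remaining pages pushed once each
```

In a real hypervisor the pending set is a bitmap and faults arrive over the network, but the invariant is the same: the pending set shrinks monotonically, bounding total transfer at one copy of the VM's memory.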

 

3)   Comparison of bandwidth optimization schemes: Many VM migration approaches
have optimized application downtime and total migration duration by employing
optimization and avoiding aggressive migration termination, as in the case of
pre-copy. Moreover, an optimization method introduces additional overhead on
shared resources such as CPU, memory, or cache while optimizing VM migration
performance parameters such as downtime, total migration time, and application
QoS. A qualitative comparison of VM migration schemes based on the selected
parameters highlights commonalities and variances among existing bandwidth
optimization schemes. Live VM migration schemes follow either the pre-copy, the
post-copy, or a hybrid migration pattern to migrate VMs across servers.

 

B. DVFS-enabled power
optimization

 

VM migration helps reduce the power consumption budget by relocating VMs.
However, controlling power consumption within a server during VM migration is
constrained by the limited support offered by the CPU architecture for applying
DVFS. One proposed approach considers a VM CAP value to decrease power
consumption: the scheme reduces the processor clock rate to keep power
consumption within a certain limit. DVFS technology makes use of the relation
between voltage, frequency, and processor speed to adjust the CPU clock
rate [3]. A power-capping-based VM migration scheme was also discussed that
prioritizes VM migration.

PMapper is a power-aware application placement framework that considers power
usage and migration cost while deciding on application placement within a data
center. During VM migration, its power manager adaptively applies DVFS to
balance power efficiency against SLA guarantees. The PMapper architecture is
based on three modules, namely the performance manager, the power manager, and
the monitoring engine. For optimal VM placement that accounts for power
efficiency and application SLAs, PMapper uses bin packing heuristics to map VMs
onto suitable servers. The monitoring engine gathers server/VM resource usage
and power state statistics before forwarding them to the power and performance
managers. PMapper sorts the servers by resource usage and power consumption to
choose, from resource availability and power consumption estimates, the most
suitable server to host the workload. It also identifies underutilized servers
from resource usage statistics and migrates their load to other servers so that
the idle servers can be shut down for power efficiency; workload is allocated
under a minimize-energy-consumption policy.

A scheduling algorithm was also proposed that utilizes DVFS methods to limit
the power consumption budget within a data center: the scheduler dynamically
checks application processing demands and optimizes energy consumption using
DVFS. Another design integrates power efficiency with power capping through an
adaptive DVFS-enabled efficiency controller and a hierarchical power-capping
controller. Its control system architecture consists of an efficiency
controller, a server capper, and a group capper: the efficiency controller
tracks the demands of individual servers, the server capper throttles power
consumption according to feedback, and the group capper throttles power
consumption at the server group level. In addition to power distribution
unfairness, the proposed scheme assumes that the server group configuration and
power supply structure are flat, whereas in practice they are hierarchical.
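PMapper's placement step can be approximated by a first-fit-decreasing bin packing heuristic over servers ordered by power efficiency. The sketch below is our own simplified reading of that idea; the function name, the capacity model, and the example values are assumptions, not PMapper's actual algorithm:

```python
# First-fit-decreasing placement sketch: VMs sorted by demand are mapped
# to the first server with enough spare capacity, with servers assumed
# pre-sorted from most to least power-efficient.
def place_vms(vm_demands: dict, servers: list) -> dict:
    free = {name: cap for name, cap in servers}  # remaining capacity
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for name, _ in servers:                  # efficiency order
            if free[name] >= demand:
                free[name] -= demand
                placement[vm] = name
                break
    return placement

placement = place_vms(
    {"vm1": 0.5, "vm2": 0.4, "vm3": 0.3},       # normalized CPU demands
    [("s1", 1.0), ("s2", 1.0)],                 # s1 more power-efficient
)
```

Packing load onto the efficient server first leaves the other server lightly used or empty, which is exactly the state that lets the framework power it down.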

C. Storage optimization

 

The proposed model consists of two components, a target server and a proxy
server, connected to the source and destination servers through a network block
device connection. Whenever the destination storage becomes fully synchronized
with the source, the connection is torn down to release source server
resources. A prototype implementation of I/O-blocked live storage migration
rapidly relocates disk blocks over WAN links with minimal impact on I/O
performance. The on-demand method fetches blocks from the source when they are
not available at the destination server; sharing storage between sender and
target servers at distant locations over the Internet is impractical. The
experiments revealed that I/O performance improved significantly compared to
conventional remote storage migration methods in terms of total migration time
and cache hit ratio. To utilize bandwidth capacity efficiently, the background
copy method is further improved with compression: introducing compression
enhances network performance in terms of bandwidth utilization, and the LZO
algorithm reduces the total transferred data for storage synchronization as
well as the migration time. In case of a connection failure during storage
migration, the hosted application's performance degrades significantly and the
system may crash; limited WAN bandwidth likewise degrades the live storage
migration process. A bitmap-based storage migration scheme employs a simple
hash algorithm such as SHA-1 to create and transfer a list of storage blocks,
called the sent bitmap, to the destination server. To migrate VMs back after
server maintenance, an intelligent incremental migration (IM) approach was
proposed that transfers only the blocks updated after the first migration,
reducing migration time and total migration data. Because synchronous
replication is costly, affecting running applications, the network, and system
resources, a cooperative, context-aware migration approach was proposed that
enables the migration management system to arrange data center migration across
server platforms.
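The bitmap-plus-hashing idea behind incremental migration can be sketched as follows: hash each disk block with SHA-1 after the bulk copy, then re-send only blocks whose digest has changed. The block size and helper names are illustrative assumptions, not details of the cited schemes:

```python
import hashlib

BLOCK = 4096  # assumed block size for illustration

def block_digests(disk: bytes) -> list:
    # One SHA-1 digest per fixed-size block of the disk image.
    return [hashlib.sha1(disk[i:i + BLOCK]).hexdigest()
            for i in range(0, len(disk), BLOCK)]

def dirty_blocks(sent_digests: list, current_digests: list) -> list:
    # Blocks whose digest changed since the bulk copy must be re-sent.
    return [i for i, (a, b) in enumerate(zip(sent_digests, current_digests))
            if a != b]

disk = bytearray(4 * BLOCK)
sent = block_digests(bytes(disk))     # digests recorded after the bulk copy
disk[BLOCK:BLOCK + 4] = b"init"       # block 1 is written during migration
updated = dirty_blocks(sent, block_digests(bytes(disk)))
```

Only the indices in `updated` cross the WAN in the incremental pass, which is what shrinks the total migrated data relative to re-copying the whole image.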

 

V.  CONCLUSION

 

In this paper, the notions of cloud computing, VM migration, storage migration,
server consolidation, and dynamic voltage and frequency scaling based power
optimization are discussed. The large size of VM memory, unpredictable workload
nature, limited bandwidth capacity, restricted resource sharing, inability to
accurately predict application demands, and aggressive migration decisions call
for dynamic, lightweight, adaptive, and optimal VM migration designs in order
to improve application performance. Furthermore, the inclusion of
heterogeneous, dedicated, and fast communication links for transferring storage
and VM memory can augment application performance by reducing total migration
time and application service downtime. Several server consolidation frameworks
co-locate VMs based on different criteria. A lightweight VM migration design
can reduce overall development effort, augment application performance, and
speed up processing in a cloud data center. Furthermore, incorporating dynamic
workload behavior into migration decisions remains a promising direction.