III. SERVER CONSOLIDATION

This section shows a taxonomy of server consolidation frameworks, a review of server consolidation frameworks, and a comparison of existing frameworks based on parameters selected from the literature.

A. Taxonomy of server consolidation frameworks

This section presents a taxonomy for classifying server consolidation frameworks. Server consolidation frameworks are divided based on five common characteristics: resource assignment policy, architecture, co-location criteria, migration triggering point, and migration model [2]. The resource assignment policy attribute is either static or dynamic. The static server consolidation method pre-assigns maximum resources to the VM upon its creation. The architecture attribute describes the server consolidation framework design.
However, centralized server consolidation frameworks are prone to a single point of failure and are therefore unreliable. The co-location criteria attribute defines the criterion used to co-host multiple VMs within a server. VM co-location criteria can be defined in terms of shared memory, communication bandwidth between VMs, power efficiency, and sufficient resource availability. The migration triggering point attribute determines the appropriate time to migrate a VM. The migration model attribute describes the migration pattern chosen to move VMs between servers. During server consolidation, VMs are migrated using either the pre-copy or the post-copy migration pattern.

B. A review of server consolidation frameworks

VM placement depends on communication cost in order to improve the performance of I/O and non-I/O applications.
Communication cost is a function of communication rate and end-to-end network delay. The communication cost between VMs is represented in order to identify communication-intensive VMs and form VM clusters. A cost tree representing the communication cost between VMs serves to place VMs according to the communication distance between them when the tree is traversed. Unwanted VM migrations are avoided in order to decrease SLA violations. The framework, however, does not consider the effect of CPU and memory workloads during VM placement; memory-intensive workloads in particular can damage system performance.

C. Comparison of server consolidation frameworks

Many VM migration approaches have optimized application downtime and total migration duration by employing optimization and avoiding aggressive migration termination. Moreover, an optimization method introduces additional overhead on shared resources such as CPU, memory, or cache while optimizing VM migration performance parameters such as downtime, total migration time, and application QoS.
A qualitative comparison of VM migration schemes based on selected parameters is presented to highlight commonalities and variances in existing bandwidth optimization schemes. Migration optimization exploits deduplication, compression, fingerprinting, and dynamic self-ballooning to improve application and network performance. Thus, VM migration approaches can make optimized use of network bandwidth.

IV. VIRTUAL MACHINE MIGRATION OPTIMIZATION

This section presents and compares VM migration optimization schemes that consider bandwidth, DVFS-enabled power, and storage optimization to reduce the side effects of the VM migration process. VM migration over a LAN uses a network-attached storage (NAS) architecture to share the storage between communicating servers.
However, migrating a VM across WAN boundaries requires migrating large-sized storage in addition to VM memory over intermittent links.

A. Bandwidth optimization

This section discusses the effective use of limited network capacity to enhance application performance during the VM migration process. It also presents a thematic taxonomy evaluation of existing schemes and comparisons between bandwidth optimization schemes.

1) Taxonomy of bandwidth optimization schemes: Different bandwidth-optimized live VM migration schemes result in varying application downtime and total migration time, depending on the nature of the workload hosted within the migrant VM, the type of network link, the number of concurrent migrant VMs, and the type of hypervisor selected to manage server resources.
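Before reviewing individual schemes, the pre-copy pattern referred to throughout this section can be sketched as an iterative dirty-page loop. The sketch below is a simplified illustration only; the stop threshold, round limit, and the simulated page-dirtying behavior are assumptions, not details of any surveyed scheme.

```python
# A simplified sketch of the iterative pre-copy live-migration loop: all
# pages are sent once, then pages dirtied during each round are resent
# until the dirty set is small enough for a brief stop-and-copy phase.
# Thresholds and the simulated dirtying behavior are illustrative.

def pre_copy(pages: set, dirtied_per_round: list, stop_threshold: int = 8,
             max_rounds: int = 10):
    """Return (pages sent per round, final stop-and-copy set)."""
    sent_rounds = []
    to_send = set(pages)                        # round 0: full memory image
    for rnd in range(max_rounds):
        sent_rounds.append(len(to_send))
        # Pages dirtied while this round was transferring must be resent.
        dirtied = set(dirtied_per_round[rnd]) if rnd < len(dirtied_per_round) else set()
        to_send = dirtied
        if len(to_send) <= stop_threshold:      # small enough: pause the VM
            break
    return sent_rounds, to_send                 # remainder goes in stop-and-copy

rounds, final = pre_copy(set(range(100)),
                         dirtied_per_round=[list(range(30)), list(range(12)),
                                            list(range(5))])
print(rounds, sorted(final))
```

The shrinking per-round counts illustrate why pre-copy converges for mostly-read workloads but needs aggressive termination rules when the dirtying rate stays high.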
The proposed scheme applies binary XOR-based RLE (XBRLE) delta compression to improve VM migration performance. Prior to triggering migration, a guest kernel conveys soft page addresses to the VMM. For further improvement, the delta page is compressed using a lightweight compression algorithm.

2) Review of bandwidth optimization schemes: An optimized post-copy VM migration scheme was proposed that exploits on-demand paging, active push, pre-paging, and dynamic self-ballooning optimizations to pre-fetch memory pages at the receiver host. However, growing bubbles around the pivot memory page to transfer neighboring memory pages does not always improve VM migration performance, especially when write-intensive applications are hosted within migrated VMs. Active push transfers memory pages to the target server and ensures that every page is sent exactly once from the source server. This scheme begins by transferring CPU registers and device states to the receiver host prior to VM memory content migration.
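The XBRLE delta step described at the start of this subsection can be sketched roughly as follows. The helper names and the pure-Python RLE codec are illustrative assumptions; a real hypervisor operates on cached page frames with a tuned codec.

```python
# Minimal sketch of XOR-based RLE (XBRLE) delta compression for a dirty
# memory page, assuming the previously transferred version of the page is
# cached at the sender. All helper names are illustrative.

def xor_delta(old_page: bytes, new_page: bytes) -> bytes:
    """XOR cached and current page; unchanged bytes become zero."""
    return bytes(a ^ b for a, b in zip(old_page, new_page))

def rle_encode(data: bytes) -> bytes:
    """Run-length encode the delta; long zero runs compress well."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

# Receiver reconstructs the page from its cached copy plus the decoded delta.
old = bytes(4096)                              # cached page (all zeros here)
new = bytearray(old); new[100:104] = b"abcd"   # a few dirtied bytes
delta = rle_encode(xor_delta(old, bytes(new)))
restored = bytes(a ^ b for a, b in zip(old, rle_decode(delta)))
assert restored == bytes(new) and len(delta) < len(new)
```

Because only a few bytes of a dirtied page usually change between rounds, the XORed delta is mostly zeros and the RLE output is far smaller than the raw page.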
3) Comparison of bandwidth optimization schemes: Many VM migration approaches have optimized application downtime and total migration duration by employing optimization and avoiding aggressive migration termination, as in the case of pre-copy. Moreover, an optimization method introduces additional overhead on shared resources such as CPU, memory, or cache while optimizing VM migration performance parameters such as downtime, total migration time, and application QoS. A qualitative comparison of VM migration schemes based on selected parameters is presented to highlight commonalities and variances in existing bandwidth optimization schemes. Live VM migration schemes follow either pre-copy, post-copy, or hybrid migration patterns to migrate VMs across servers.

B. DVFS-enabled power optimization

VM migration helps reduce the power consumption budget by migrating VMs onto fewer servers.
However, power consumption within a server during VM migration exceeds the limited support offered by the CPU architecture for DVFS application. The proposed approach considers the VM CAP value to decrease power consumption. To this end, the proposed scheme reduces the processor clock rate to keep power consumption within a certain limit. DVFS technology makes use of the relation between voltage, frequency, and processor speed to adjust the CPU clock rate [3].
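The voltage-frequency relation that DVFS exploits is commonly modeled by the textbook CMOS dynamic power equation P ≈ C·V²·f. A small illustration follows; the capacitance, voltage, and frequency values are made up for the example and are not taken from the surveyed work.

```python
# A minimal sketch of the textbook CMOS dynamic power relation that DVFS
# exploits: P_dynamic ≈ C * V^2 * f, where supply voltage can be lowered
# along with clock frequency. All values below are illustrative.

def dynamic_power(capacitance: float, voltage: float, frequency: float) -> float:
    """Dynamic power in watts for switched capacitance C (F), V (volts), f (Hz)."""
    return capacitance * voltage ** 2 * frequency

nominal = dynamic_power(1e-9, 1.2, 2.4e9)   # full speed
scaled  = dynamic_power(1e-9, 0.9, 1.2e9)   # half frequency at lower voltage

# Halving f alone halves P; lowering V as well yields a super-linear saving.
print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W")
print(f"saving: {100 * (1 - scaled / nominal):.0f}%")
```

The quadratic dependence on voltage is why DVFS governors pair each frequency step with the lowest stable voltage rather than scaling frequency alone.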
A power-capping-based VM migration scheme was discussed that prioritizes VM migrations. PMapper is a power-aware application placement framework that considers power usage and migration cost while deciding on application placement within a DC. Moreover, during VM migration, the power manager adaptively applies DVFS to balance power efficiency and SLA guarantees. The PMapper architecture is based on three modules, namely the performance manager, the power manager, and the monitoring engine. For optimal VM placement while considering power efficiency and application SLAs, PMapper uses bin-packing heuristics to map VMs onto suitable servers. The monitoring engine module gathers server/VM resource usage and power-state statistics before forwarding them to the power and performance managers. Furthermore, it sorts the servers based on resource usage and power consumption to choose the most suitable server, based on resource availability and power consumption estimates, to host the workload. It also identifies underutilized servers according to resource usage statistics and migrates their load to other servers so that idle servers can be shut down for power efficiency.
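A bin-packing placement in the spirit of PMapper can be sketched as a first-fit-decreasing heuristic that fills the most power-efficient servers first. The server names, capacities, and per-unit power figures below are hypothetical illustrations, not values from the original framework.

```python
# First-fit-decreasing bin packing for power-aware VM placement: servers
# are ordered by power efficiency, VMs by descending CPU demand, and each
# VM lands on the first efficient server with enough spare capacity.
# All names and numbers are illustrative assumptions.

from typing import Dict, List, Tuple

def place_vms(vms: Dict[str, float],
              servers: List[Tuple[str, float, float]]) -> Dict[str, str]:
    """Map each VM (name -> CPU demand) to a server.

    servers: (name, capacity, power_per_unit_of_work).
    """
    servers = sorted(servers, key=lambda s: s[2])          # most efficient first
    free = {name: cap for name, cap, _ in servers}
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for name, _, _ in servers:
            if free[name] >= demand:                       # first fit
                free[name] -= demand
                placement[vm] = name
                break
        else:
            raise RuntimeError(f"no capacity for {vm}")
    return placement

placement = place_vms(
    {"vm1": 0.6, "vm2": 0.5, "vm3": 0.3},
    [("s1", 1.0, 120.0), ("s2", 1.0, 200.0)],
)
print(placement)
```

Sorting VMs by descending demand before first-fit is the classic FFD refinement; it packs large VMs early so small ones can fill the leftover gaps on efficient servers.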
PMapper allocates workload based on a minimizing-energy-consumption policy. A scheduling algorithm was proposed to utilize DVFS methods to limit the power consumption budget within a DC. The proposed scheduler dynamically checks application processing demands and optimizes energy consumption using DVFS. A hierarchical controller for power capping, built on an adaptive DVFS-enabled power efficiency controller, integrates power efficiency with power capping. The control system architecture consists of an efficiency controller, a server capper, and a group capper.
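The interplay of these components can be sketched as a simple feedback hierarchy: the group capper divides a group budget across servers, and each server capper steps its DVFS frequency down until measured power fits its share. The power model, budget, and frequency steps below are illustrative assumptions, not details of the proposed controller.

```python
# Simplified hierarchical power-capping sketch: a group capper splits a
# group budget evenly, and each server capper picks the highest DVFS step
# whose (hypothetical) power draw fits the server's share of the budget.

FREQ_STEPS = [2.4, 2.0, 1.6, 1.2]      # GHz, highest first (illustrative)

def server_power(freq_ghz: float, load: float) -> float:
    """Hypothetical power model: watts grow with load and frequency squared."""
    return 40 + 50 * load * (freq_ghz / FREQ_STEPS[0]) ** 2

def cap_server(load: float, budget_w: float) -> float:
    """Server capper: highest frequency whose power fits the budget."""
    for f in FREQ_STEPS:
        if server_power(f, load) <= budget_w:
            return f
    return FREQ_STEPS[-1]              # floor frequency if budget is too tight

def cap_group(loads: list, group_budget_w: float) -> list:
    """Group capper: divide the group budget evenly across servers."""
    share = group_budget_w / len(loads)
    return [cap_server(load, share) for load in loads]

freqs = cap_group([0.9, 0.5, 0.2], group_budget_w=210.0)
print(freqs)
```

Note the flat even split mirrors the criticized assumption in the text: a hierarchical power supply would instead require per-subtree budgets.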
The efficiency controller is responsible for tracking the demands of individual servers, while the server capper throttles power consumption according to feedback and the group capper throttles power consumption at the server group level. In addition to power distribution unfairness, the proposed scheme assumes that the server group configuration and power supply structure are flat; however, they are actually hierarchical.

C. Storage optimization

The proposed model consists of two components, a target server and a proxy server, connected to the source and destination servers through a network block device connection. Whenever the destination storage is completely synchronized with the source, the connection is torn down to release source server resources. A prototype implementation of I/O-blocked live storage migration rapidly relocates disk blocks over WAN links with minimal impact on I/O performance.
The on-demand method fetches memory blocks from the source when they are not available at the destination server. However, storage sharing between sender and target servers at distant locations over the Internet is impractical. The experiments revealed that I/O performance improved significantly compared to conventional remote storage migration methods in terms of total migration time and cache hit ratio. Therefore, to efficiently utilize bandwidth capacity, the background copy method is improved with compression. A comparison of storage migration schemes is also presented.
Introducing compression enhances network performance in terms of bandwidth utilization. The LZO algorithm is used to reduce the total transferred data for storage synchronization and the migration time. In case of a connection failure during storage migration, the hosted application's performance degrades significantly and the system may crash. Limited WAN bandwidth likewise degrades the live storage migration process. A bitmap-based storage migration scheme employs a simple hash algorithm such as SHA-1 to create and transfer a list of storage blocks, called a sent bitmap, to the destination server.
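The sent-bitmap idea can be sketched as follows: the sender hashes fixed-size disk blocks with SHA-1 and only ships blocks whose digests the destination does not already hold. The block size and helper names are illustrative assumptions.

```python
# Sketch of bitmap-based storage migration: fixed-size disk blocks are
# fingerprinted with SHA-1; only blocks missing at the destination cross
# the WAN. Block size and helper names are illustrative.

import hashlib

BLOCK = 4096

def block_digests(disk: bytes) -> list:
    """SHA-1 digest of each fixed-size block of the disk image."""
    return [hashlib.sha1(disk[i:i + BLOCK]).digest()
            for i in range(0, len(disk), BLOCK)]

def blocks_to_send(src_disk: bytes, dst_digests: set) -> list:
    """Indices of source blocks whose digest the destination lacks."""
    return [i for i, d in enumerate(block_digests(src_disk))
            if d not in dst_digests]

src = bytes(BLOCK) + b"x" * BLOCK + bytes(BLOCK)   # 3 blocks, one unique
dst = set(block_digests(bytes(BLOCK)))             # destination holds the zero block
print(blocks_to_send(src, dst))
```

Shipping digests instead of blocks is what makes the scheme cheap over limited WAN bandwidth: duplicate blocks (zero pages, common OS images) are suppressed before any data is sent.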
However, in order to migrate VMs back after server maintenance, an intelligent incremental migration (IM) approach is proposed that only transfers blocks updated since the original migration, to reduce migration time and total migrated data. Synchronous replication is costly, as it affects running applications, network, and system resources; a cooperative, context-aware migration approach was therefore proposed, which enables the migration management system to arrange DC migration across server platforms.

V. CONCLUSION

In this paper, the notions of cloud computing, VM migration, storage migration, server consolidation, and dynamic voltage and frequency scaling based power optimization are discussed. The large size of VM memory, unpredictable workload nature, limited bandwidth capacity, restricted resource sharing, inability to accurately predict application demands, and aggressive migration decisions call for dynamic, lightweight, adaptive, and optimal VM migration designs in order to improve application performance. Furthermore, the inclusion of heterogeneous, dedicated, and fast communication links for storage and VM memory transfer can augment application performance by reducing total migration time and application service downtime.
Several server consolidation frameworks co-locate VMs to reduce the number of active servers. A lightweight VM migration design can reduce the overall development effort, augment application performance, and speed up processing in a CDC.
Furthermore, the incorporation of dynamic workload behavior into VM migration decisions can further improve application performance.