Expert Reference Series of White Papers

Virtualization for Newbies

Steve Baca, VCP, VCI, VCAP, Global Knowledge Instructor
www.globalknowledge.com

Introduction

Virtualization is an umbrella term that continues to evolve to include many different types of virtualization, used in many different ways in production environments. Originally, virtualization was done by writing software and firmware code for physical equipment so that the equipment could run multiple jobs at once. With the success of VMware and its virtualization of x86 hardware, the term has grown to cover not just virtualizing servers but whole new areas of IT. This white paper looks at the origins of virtualization and how some of that historical development spurred on today's virtualization. In addition, it discusses the different types of virtualization used in the marketplace today and lists some of the leading vendors.

Why Virtualize?

In general, the idea behind virtualization is to make many from one. As an example, from one physical server running virtualization software, multiple virtual machines can run as if each virtual machine were a separate physical box. In data centers before virtualization, one or more applications and an operating system would run on their own dedicated physical server. Since each of those physical servers needed floor or rack space, the growing size and number of data centers became a problem for IT. Using virtualization to consolidate physical servers reversed that sprawl, and companies began to see cost savings.

From the system administrator's point of view, another reason to virtualize is the ability to quickly add more virtual machines as needed, without having to purchase new physical servers. The delay in obtaining new servers varies widely from company to company and, in some environments, can be quite lengthy. With virtualization, the process is greatly shortened because the physical server is already up and running in production: the system administrator can quickly create a brand-new virtual machine on an existing physical host. Thus, you can run many virtual machines on one physical server.

A third reason to virtualize is better resource utilization. Before virtualization, it was not unusual to see a physical server using less than five or ten percent of its CPU and/or memory. As an example, consider a physical server purchased to run an application that only runs during the evening. When the application is not processing, such as in the morning or afternoon, the physical box sits idle, which is a tremendous waste of resources. If that nighttime application is virtualized instead, its virtual machine can run on the same server as other virtual machines whose applications use resources during the morning or afternoon, and the virtual machines balance each other's resource usage. Since one virtual machine's application runs during the day and the other's processes at night, the physical server makes better use of its resources. With server-side virtualization from vendors such as VMware, resources such as memory and CPU can be safely utilized by multiple virtual machines at 75 to 80 percent on a continuous basis. The advantage is that resource utilization is far more efficient with virtualization than if the applications ran on individual physical servers.
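To make that consolidation arithmetic concrete, the short Python sketch below compares the average CPU utilization of a day-shift and a night-shift workload running on two dedicated servers versus one shared, virtualized host. The hourly demand figures and the 80 percent ceiling are invented example numbers, not measurements from this paper.

    # Illustrative example only: complementary day/night workloads on one shared host.
    # The demand curves and the 80% ceiling are assumed values for the sake of the math.

    DAY_APP = [35 if 8 <= h < 18 else 2 for h in range(24)]            # % CPU, busy 08:00-18:00
    NIGHT_APP = [40 if (h >= 20 or h < 4) else 2 for h in range(24)]   # % CPU, busy overnight
    SAFE_CEILING = 80  # assumed continuous-utilization target for a virtualized host

    def average(samples):
        return sum(samples) / len(samples)

    combined = [d + n for d, n in zip(DAY_APP, NIGHT_APP)]

    print(f"Dedicated server, day app:   {average(DAY_APP):.1f}% average CPU")
    print(f"Dedicated server, night app: {average(NIGHT_APP):.1f}% average CPU")
    print(f"Shared virtualized host:     {average(combined):.1f}% average CPU, "
          f"peak {max(combined)}% (ceiling {SAFE_CEILING}%)")

Because the two workloads are busy at different times of day, the shared host roughly doubles its average utilization without its peak ever approaching the assumed ceiling.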
A fourth reason to virtualize is that it can provide features that create a more reliable environment. As an example, VMware offers a feature called High Availability (HA), which comes into play when a physical server fails. After HA has determined that the physical server is down, it can restart that server's virtual machines on the surviving servers. An application therefore experiences less downtime, because HA provides an automated response to physical server failure. Other vendors have their own features written into their code that offer different forms of reliability as well.
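The sketch below models that restart decision in a deliberately simplified form. It is not VMware's HA implementation or API; the host names, heartbeat flags, and memory figures are made-up examples used only to show the idea of moving a failed host's virtual machines onto survivors with spare capacity.

    # Simplified illustration of an HA-style restart decision; not any vendor's actual code.
    # Hosts report a heartbeat; VMs from a dead host are restarted on surviving hosts
    # that still have enough free memory (all values below are invented examples).

    hosts = {
        "host01": {"alive": False, "mem_free_gb": 0,  "vms": [("web01", 8), ("db01", 16)]},
        "host02": {"alive": True,  "mem_free_gb": 24, "vms": [("app01", 8)]},
        "host03": {"alive": True,  "mem_free_gb": 16, "vms": []},
    }

    def restart_orphaned_vms(hosts):
        for host in hosts.values():
            if host["alive"]:
                continue
            for vm, mem_needed in host["vms"]:
                # Choose the surviving host with the most free memory that can fit the VM.
                candidates = [(name, h) for name, h in hosts.items()
                              if h["alive"] and h["mem_free_gb"] >= mem_needed]
                if not candidates:
                    print(f"{vm}: no capacity left, stays powered off")
                    continue
                target_name, target = max(candidates, key=lambda c: c[1]["mem_free_gb"])
                target["vms"].append((vm, mem_needed))
                target["mem_free_gb"] -= mem_needed
                print(f"{vm}: restarted on {target_name} ({mem_needed} GB reserved)")
            host["vms"] = []

    restart_orphaned_vms(hosts)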
These are a few of the reasons to virtualize, and there are certainly more. Now, let's turn to the beginning of virtualization.

Origins of Virtualization

The origins of virtualization begin with a paper on time-shared computers presented by C. Strachey at the June 1959 UNESCO Information Processing Conference. Time-sharing was a new idea, and Professor Strachey was the first to publish on the topic that would lead to virtualization. After this conference, new research was done, and several more research papers on time-sharing began to appear. These papers energized a small group of programmers at the Massachusetts Institute of Technology (MIT) to begin developing the Compatible Time-Sharing System (CTSS). From these first attempts at time-sharing systems, virtualization was pioneered in the early 1960s by IBM, General Electric, and other companies attempting to solve several problems.

The main problem IBM wanted to solve was that each new system it introduced was incompatible with previous systems. IBM's president, T. J. Watson, Jr., had given an IBM 704 for use by MIT and other New England schools in the 1950s. Then, each time IBM built a newer, bigger processor, the system had to be upgraded, and customers had to be retrained whenever a new system was introduced. To solve this problem, IBM designed its new S/360 mainframe to be backwards compatible, but it was still a single-user system running batch jobs.

At the same time, MIT and Bell Labs were asking for time-sharing systems to solve their problem of many programmers and very few systems on which to run their programs. IBM therefore developed the System/360-40 (the CP-40 mainframe) for their lab to test time-sharing. This first system, the CP-40, eventually evolved into the first commercial mainframe to support virtualization, the System/360-67 (the CP-67 mainframe), released publicly in 1968. The CP-67 contained a 32-bit CPU with virtual memory hardware, and its operating system was named Control Program/Console Monitor System (CP/CMS). This early hypervisor gave each mainframe user a Console Monitor System (CMS), essentially a single-user operating system that did not have to be complex because it supported only one user. The hypervisor provided the resources, while the CMS supported the time-sharing capabilities, allocation, and protection. CP-67 enabled memory sharing across virtual machines while giving each user their own virtual memory. Thus, the CP operating system's approach provided each user with an operating system at the machine instruction level.

Virtualization continues to be used on mainframe systems even today, but it took nearly two decades before virtualization became heavily used outside of the mainframe world. Although IBM had provided a blueprint for virtualization, the client-server systems that took over from the mainframe were inexpensive and not powerful enough to run multiple operating systems, so they could not support virtualization, and the idea faded for many years. Eventually, hardware performance increased to the point where significant savings could be realized by virtualizing x86. The concepts of virtualization developed on the mainframe were ported over to x86 servers by VMware in 1998, and a new era of virtualization began.

Types and Major Players in Virtualization

Although some form of virtualization has been around since the mid-1960s, it has evolved over time while remaining close to its roots. Much of the evolution in virtualization has occurred in just the last few years, with new types being developed and commercialized. With so many different types released and no true standard definition, it can be difficult to restrict virtualization to just a few areas. For the purposes of this paper, the definition of virtualization is therefore limited to "making many from one," and the discussion is confined to the most popular types used in business today: desktop virtualization, application virtualization, server virtualization, storage virtualization, and network virtualization.

Desktop Virtualization

Virtualization of the desktop, sometimes referred to as Virtual Desktop Infrastructure (VDI), is where a desktop operating system, such as Windows 7, runs as a virtual machine on a physical server alongside other virtual desktops. The processing of multiple virtual desktops occurs on one or a few physical servers, typically in a centralized data center. The copy of the OS and applications that each end user utilizes is typically cached in memory as one image on the physical server.

Going back to the IBM mainframe era, each user relied on the mainframe to do the centralized processing for their terminal session, so the user's environment consisted of a monitor and a keyboard, with all of the processing happening on the centralized mainframe. The monitor was not in color, which meant programs that used color graphics were not available on a terminal connected to a mainframe. In the 1990s, however, IT started to migrate to inexpensive desktop systems where each user had a physical computer. The PC consisted of a color monitor, keyboard, and mouse, with much of the processing and the operating system running locally, using the physical desktop's central processing unit (CPU) and random access memory (RAM) instead of the centralized mainframe.

In today's VDI marketplace, two dominant vendors, VMware Horizon View and Citrix XenDesktop, are vying to become the leader in desktop virtualization. Both can deliver graphical displays with rapid response from the centralized servers, the virtual desktops come with a mouse, and both solutions make the end user's experience feel as though the remote desktop were local. Thus, the performance of the remote desktop and the way the end user accesses applications should be no different than on a physical desktop. Both VMware Horizon View and Citrix XenDesktop have a strong footprint and are the most widely used choices for desktop virtualization in business today.

Application Virtualization

Application virtualization uses software to package an application into a "single executable, run anywhere" type of application. The application is separated from the operating system and runs in what is referred to as a "sandbox." Virtualizing the application allows things like registry and configuration changes to appear to be made in the underlying operating system, although they really happen only inside the sandbox. There are two types of application virtualization: remote applications and streamed applications. A remote application runs on a server, and a remote display protocol carries the display back to the client machine; since many system administrators and users already have experience working remotely, remote application displays can be fairly easy to set up. With a streamed application, one copy of the application is published on a server, and many client desktops fetch and run it locally. Streaming also makes upgrades easier: you publish another streamed package containing the new version and have the end users point to it.

Some of the application virtualization products in the marketplace are Citrix XenApp, Novell ZENworks Application Virtualization, and VMware ThinApp.
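To make the sandbox idea concrete, the toy Python sketch below redirects an application's configuration writes into a private per-application directory, so the program appears to change a system-wide file while the change never leaves its sandbox. This only illustrates the redirection concept; it is not how XenApp, ZENworks, or ThinApp are actually implemented, and every path in it is an invented example.

    # Toy illustration of sandboxing by path redirection: writes aimed at a shared
    # system location are diverted into the application's own sandbox directory.
    # Real products do this (and registry redirection) transparently at a lower level.
    import os

    SANDBOX_ROOT = "/tmp/appsandbox/legacyapp"   # hypothetical per-application sandbox

    def sandboxed_path(requested_path):
        """Map a path the application asked for to its private copy in the sandbox."""
        return os.path.join(SANDBOX_ROOT, requested_path.lstrip("/"))

    def write_config(requested_path, text):
        real_path = sandboxed_path(requested_path)
        os.makedirs(os.path.dirname(real_path), exist_ok=True)
        with open(real_path, "w") as f:
            f.write(text)
        print(f"app wrote {requested_path!r}, actually stored at {real_path!r}")

    # The application believes it is editing a system-wide configuration file.
    write_config("/etc/legacyapp/settings.conf", "color_scheme=dark\n")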
Server Virtualization

Server virtualization allows many virtual machines to run on one physical server. The virtual machines share the resources of the physical server, which leads to better utilization of that server's resources. The shared resources are CPU, memory, storage, and networking, and all of them are provided to the virtual machines through the hypervisor on the physical server. The hypervisor is the operating system and software that runs on the physical box. Each virtual machine runs independently of the other virtual machines on the same box; the virtual machines can run different operating systems and are isolated from one another. Server virtualization thus offers a way to consolidate applications that used to run on individual physical servers: with the hypervisor software, they now run on the same physical server as virtual machines.

Server virtualization is what most people think of when they think of virtualization, largely because of VMware's vSphere, which holds a large percentage of the marketplace. Some of the other vendors are Citrix XenServer, Microsoft's Hyper-V, and Red Hat Enterprise Virtualization.
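As a concrete illustration of asking a hypervisor for a new virtual machine on an existing host, the sketch below uses the open-source libvirt Python bindings against a local KVM host. This is only a stand-in example, not vSphere, Hyper-V, or XenServer tooling; the domain definition is pared down for brevity, a real guest would also need disk and network devices, and the VM name is invented.

    # Stand-in example: define and start a guest through libvirt on a local KVM host.
    # Each vendor's hypervisor has its own management tooling; this just shows the idea
    # of adding a VM to a host that is already running in production.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-vm01</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open("qemu:///system")      # connect to the hypervisor on this host
    try:
        domain = conn.defineXML(DOMAIN_XML)    # register the new virtual machine
        domain.create()                        # power it on
        print("Active guests on this host:")
        for dom in conn.listAllDomains():
            if dom.isActive():
                print(" -", dom.name())
    finally:
        conn.close()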
Storage Virtualization

Storage virtualization is the process of using software to group physical storage so that it appears as a single storage device in a virtual format. Correlations can be drawn between storage virtualization and traditional virtual machines, since both take physical hardware and resources and abstract access to them. There is a difference, however: a virtual machine is a set of files, while virtual storage typically runs in memory on the storage controller and is created with software.

A form of storage virtualization has been incorporated into storage features for many years. Features such as snapshots and RAID take physical disks and present them in a virtual format. These features can provide a format that helps with performance or adds redundancy to the storage presented to the host as a volume. The host sees the volume as one big disk, which fits the description of storage virtualization.
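To illustrate the "many disks presented as one big disk" idea, the toy Python sketch below concatenates several physical disks into a single logical volume and maps a logical block address back to the disk that actually holds it, in the spirit of a simple linear volume manager. The disk names and sizes are invented, and real storage virtualization layers striping, mirroring, parity, snapshots, and caching on top of this.

    # Toy model of presenting several physical disks as one logical volume.
    # The host addresses one large block device; the software maps each logical
    # block to the physical disk that stores it. All figures are made-up examples.

    class LinearVolume:
        def __init__(self, disks):
            # disks: list of (disk_name, size_in_blocks)
            self.disks = disks
            self.total_blocks = sum(size for _, size in disks)

        def locate(self, logical_block):
            """Map a logical block on the volume to (disk name, physical block)."""
            if not 0 <= logical_block < self.total_blocks:
                raise ValueError("block address outside the volume")
            offset = logical_block
            for name, size in self.disks:
                if offset < size:
                    return name, offset
                offset -= size

    # The host simply sees one 3,000-block disk.
    volume = LinearVolume([("disk0", 1000), ("disk1", 1000), ("disk2", 1000)])
    print(volume.total_blocks)   # 3000
    print(volume.locate(1500))   # ('disk1', 500)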
