US20130086298A1 - Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion - Google Patents
- Publication number
- US20130086298A1 (U.S. application Ser. No. 13/252,676)
- Authority
- US
- United States
- Prior art keywords
- network adapter
- virtual machine
- state data
- hardware state
- memory
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Definitions
- the present disclosure relates to migrating a virtual machine, which generates stateful offload data packets, from a first system to a second system. More particularly, the present disclosure relates to extracting hardware state data from a source network adapter and copying the extracted hardware state data to a destination system during the migration.
- Modern communication network adapters support “stateful” offload data transmission formats in which the network adapters perform particular processing tasks in order to reduce a host system's processing load.
- Typical stateful offload formats include Remote Direct Memory Access (RDMA), Internet Wide RDMA Protocol (iWARP), Infiniband (IB), and TCP Offload Engine (TOE).
- the network adapters restrict the “state” for any given virtual machine connection to the context of the network adapter's instance corresponding to the virtual machine.
- Stateful offload information that represents this context includes hardware state data that describes hardware properties on a per virtual machine basis, such as information corresponding to connections, registers, memory registrations, structures used to communicate with the virtual machine (Queue Pairs, Completion Queues, etc.), and other miscellaneous data structures, such as address resolution protocol (ARP) tables.
- a migration agent receives a message to migrate a virtual machine from a first system to a second system.
- the first system extracts hardware state data stored in a native format from a memory area located on the first system's network adapter.
- the hardware state data is utilized by the first system's network adapter to process data packets generated by the virtual machine.
- the virtual machine is migrated to the second system, which includes copying the extracted hardware state data from the first system to the second system.
- the second system configures a corresponding second network adapter by writing the copied hardware state data to a memory located on the second network adapter.
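- These four steps occur in strict sequence. The following sketch compresses them into direct calls as an illustration only; every class, method, and variable name is invented, since the disclosure defines no programming interface:

```python
# Hypothetical end-to-end sketch of the four-step migration summarized above.
# All names are invented for illustration; the disclosure defines no API.

class Adapter:
    def __init__(self):
        self.state = {}                       # per-VM hardware state, native format

    def extract_state(self, vm_id):
        return self.state.pop(vm_id)          # step 1: extract from adapter memory

    def insert_state(self, vm_id, blob):
        self.state[vm_id] = blob              # step 4: write into adapter memory


def migrate(vm_id, source_adapter, dest_adapter, shared_memory):
    blob = source_adapter.extract_state(vm_id)               # extract
    shared_memory[vm_id] = blob                              # copy with the VM
    dest_adapter.insert_state(vm_id, shared_memory[vm_id])   # configure destination


src, dst = Adapter(), Adapter()
src.state["vm135"] = b"\x01\x02"              # opaque native-format state blob
migrate("vm135", src, dst, shared_memory={})
assert dst.state["vm135"] == b"\x01\x02"
```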
- FIG. 1 is an exemplary diagram showing a migration agent migrating a logical partition, which includes a virtual machine and native network adapter hardware state data, from a source system to a destination system;
- FIG. 2 is an exemplary diagram showing a graphical representation of discovering a suitable destination system;
- FIG. 3 is an exemplary candidate table that includes host properties and corresponding network adapter property table entries;
- FIG. 4 is an exemplary flowchart showing steps taken in discovering a destination system and migrating a virtual machine to the destination system;
- FIG. 5 is an exemplary flowchart showing steps taken in discovering a suitable destination system that includes a compatible host and an equivalent network adapter compared with a source system;
- FIG. 6 is an exemplary flowchart showing steps taken in a host system preparing a virtual machine for migration;
- FIG. 7 is an exemplary flowchart showing steps taken in migrating a logical partition from a source system to a destination system;
- FIG. 8 is an exemplary diagram showing a network adapter tracking and storing hardware state data for modules executing on a virtual machine;
- FIG. 9 is an exemplary diagram showing the migration of hardware state data from a source network adapter to a destination network adapter;
- FIG. 10 is an exemplary diagram showing a distributed policy service accessing a candidate table storage area to identify a suitable destination system;
- FIG. 11 is an exemplary diagram showing virtual network abstractions that are overlayed onto a physical network space;
- FIG. 12 is an exemplary block diagram of a data processing system in which the methods described herein can be implemented; and
- FIG. 13 provides an extension of the information handling system environment shown in FIG. 12 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment.
- aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the present disclosure describes a method for migrating a virtual machine from a source system to a destination system.
- the migration includes extracting hardware state data from a source network adapter corresponding to the virtual machine, and copying the hardware state data to a destination network adapter included in the destination system.
- a system administrator has flexibility to migrate the virtual machine to the destination system as required, such as to resolve security issues or network bandwidth issues.
- FIG. 1 is an exemplary diagram showing a migration agent migrating a virtual machine, which includes native network adapter hardware state data, from a source system to a destination system.
- Overlay network environment 100 overlays onto a physical network and utilizes logical policies to send data between virtual machines over virtual networks.
- the virtual networks are independent from physical topology constraints of the physical network (see FIG. 11 and corresponding text for further details).
- Overlay network environment 100 includes source system 105 .
- Source system 105 includes host 110 and source network adapter 150 .
- Host 110 includes hypervisor 145 , which provisions virtual machine 135 and device driver 140 .
- Virtual machine 135 utilizes device driver 140 to send stateful offload data packets to source network adapter 150 .
- the stateful offload data packets may adhere to a stateful offload format such as Remote Direct Memory Access (RDMA), Internet Wide RDMA Protocol (iWARP), Infiniband (IB), or TCP Offload Engine (TOE).
- source network adapter 150 processes the data packets utilizing hardware state data 152 and transmits the data packets to a destination virtual machine over overlay network environment 100 .
- Hardware state data 152 includes stateful information that represents source network adapter 150 's context, such as data pertaining to connections and structures used to communicate with virtual machine 135 (e.g., queue pairs, completion queues, etc.), and may also include register information, memory registrations, and other miscellaneous data structures (e.g., ARP tables, sequence numbers, retransmission information, etc.).
- hardware state data 152 includes Layer 4 (of the OSI Model) connection state information that allows source network adapter 150 to perform retransmission and packet acknowledgements, which alleviates host 110 from performing such menial tasks.
- iWARP provides RDMA capability over a standard Ethernet fabric, which utilizes application buffers that are mapped to an underlying Ethernet adapter.
- a connection is made with the network adapter that initiates a TCP connection.
- Once active, data on the application's outgoing buffers are encapsulated by the network adapter as TCP segments as packets are built.
- a system administrator may wish to migrate virtual machine 135 from source system 105 to a different system, such as for security purposes or network bandwidth management purposes.
- the system administrator may send a migration command to migration agent 160 (included in distributed policy service 165 ), which is responsible for discovering a suitable destination system that includes a compatible host and an equivalent network adapter that supports overlay network environment 100 .
- a compatible host is one that satisfies a migrating virtual machine's system requirements, such as CPU requirements, memory requirements, bandwidth requirements, etc.
- an equivalent network adapter is one that corresponds to the same vendor identifier and the same revision identifier as source network adapter 150 .
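- Expressed as code, the two tests above might resemble the following sketch; the field names (e.g., memory_gb, vendor_id) are assumptions, since the disclosure names the criteria but not a data layout:

```python
# Hypothetical compatibility and equivalence checks; field names are assumed.

def host_is_compatible(host, vm_requirements):
    """A compatible host meets or exceeds every VM requirement."""
    return all(host.get(key, 0) >= needed for key, needed in vm_requirements.items())

def adapter_is_equivalent(candidate, source):
    """An equivalent adapter matches the source's vendor and revision identifiers."""
    return (candidate["vendor_id"] == source["vendor_id"]
            and candidate["revision_id"] == source["revision_id"])

host = {"cpu_ghz": 2.4, "memory_gb": 6, "bandwidth_gbps": 10}
vm = {"cpu_ghz": 2.0, "memory_gb": 4, "bandwidth_gbps": 1}
print(host_is_compatible(host, vm))   # True: 6 GB offered against 4 GB required
```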
- Migration agent 160 proceeds through a series of discovery steps to identify destination system 115 as a suitable destination system.
- migration agent 160 utilizes a candidate table, which includes host properties and network adapter properties, to identify the suitable destination system (see FIGS. 3, 5, and corresponding text for further details).
- migration agent 160 determines that host 120 supports virtual machine 135 's system requirements and destination network adapter 190 is equivalent to source network adapter 150 (e.g., includes matching device id, firmware version, and other relevant adapter attributes).
- In order to migrate virtual machine 135, hardware state data 152 must also be migrated. Hardware state data 152, however, is partially or completely opaque to device driver 140 and virtual machine 135. As such, migration agent 160 indicates to source network adapter 150 (through device driver 140, hypervisor 145, or other driving agent) to extract hardware state data 152.
- Source network adapter 150 quiesces I/O and memory activity to avoid state changes or corruption during the extraction process, and copies hardware state data 152 via device driver 140 to shared memory 142 at a specified memory block starting address.
- the memory block starting address may be negotiated as part of its initialization or provided as a parameter in the extraction command to source network adapter 150 .
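- A sketch of that extraction sequence follows. The quiesce step, the native-format copy, and the negotiated starting address come from the text above; the byte layout and method names are invented for illustration:

```python
# Hypothetical extraction flow: quiesce, then copy the native-format state
# into shared memory at a starting address. Layout and names are invented.

class SourceAdapter:
    def __init__(self, hw_state: bytes):
        self.hw_state = hw_state
        self.quiesced = False

    def quiesce(self):
        # Stop I/O and memory activity so the state cannot change mid-copy.
        self.quiesced = True

    def extract_to(self, shared_memory: bytearray, start_addr: int):
        assert self.quiesced, "extraction without quiesce risks corruption"
        end = start_addr + len(self.hw_state)
        shared_memory[start_addr:end] = self.hw_state   # native format, as-is


shared = bytearray(4096)
adapter = SourceAdapter(hw_state=b"QP/CQ/ARP context")
adapter.quiesce()
adapter.extract_to(shared, start_addr=0x200)   # address negotiated at init or
                                               # passed in the extraction command
```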
- Migration agent 160 sends a migration request to source system 105 and destination system 115 to migrate virtual machine 135 .
- hypervisors 145 and 185 establish a connection to stream virtual machine 135 (which includes shared memory 142) to host 120, resulting in virtual machine 175 and shared memory 182.
- hypervisor 185 allocates device driver 180 to logical partition 170 , and sends a state insert command to destination network adapter 190 .
- the state insert command instructs destination network adapter 190 to retrieve the hardware state data from shared memory 182 at the memory block starting address, and load hardware state data 192 onto network adapter 190 .
- hardware state data 152 maintains its native form when stored in destination network adapter 190 , thus negating address translation steps.
- destination network adapter 190 performs a checksum to validate the hardware state data.
- destination network adapter 190 may utilize a header or individual flags to efficiently set the context.
- migration agent 160 may facilitate one or more transactions between source network adapter 150 and destination network adapter 190 to verify the equivalence of their states.
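- The destination-side state insert and checksum validation might look like the sketch below. The disclosure does not name a checksum algorithm; CRC-32 appears here purely as a stand-in:

```python
# Hypothetical "state insert": read the blob from shared memory at the agreed
# starting address, validate it, then load it into the destination adapter.
import zlib

def state_insert(adapter_memory: bytearray, shared_memory: bytearray,
                 start_addr: int, length: int, expected_crc: int):
    blob = bytes(shared_memory[start_addr:start_addr + length])
    if zlib.crc32(blob) != expected_crc:        # validate before loading
        raise IOError("hardware state failed checksum validation")
    adapter_memory[:length] = blob              # native format, no translation

shared = bytearray(4096)
shared[0x200:0x205] = b"state"
dest_adapter_memory = bytearray(64)
state_insert(dest_adapter_memory, shared, 0x200, 5, zlib.crc32(b"state"))
```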
- FIG. 2 is an exemplary diagram showing a graphical representation of discovering a suitable destination system.
- migration agent 160 iteratively selects a suitable destination system based upon available hosts, compatible hosts, and equivalent network adapters.
- migration agent 160 uses a candidate table, such as that shown in FIG. 3 , to perform such iteration steps.
- the migration agent identifies available hosts 220 included in overlay network environment 100 .
- Available hosts 220 include hosts 250 - 290 , each utilizing various network adapters.
- the example in FIG. 2 shows that the migration agent determines that hosts 250 - 268 do not satisfy host requirements of the migrating virtual machine (e.g., not enough memory or bandwidth availability).
- the migration agent identifies hosts 272 - 290 as “compatible” hosts 230 , which meet or exceed host requirements of the migrating virtual machine.
- the migration agent analyzes network adapters 274 , 285 , and 295 corresponding to compatible hosts 230 in order to identify a network adapter that is equivalent to the network adapter utilized by the migrating virtual machine.
- an equivalent network adapter is one that matches the migrating virtual machine's network adapter in both device ID and vendor ID.
- the example shown in FIG. 2 shows that network adapter 295 is equivalent to the migrating virtual machine's network adapter.
- the migration agent sends a message to the source and destination systems' hypervisors to establish a connection and migrate the virtual machine from the source system to the destination system.
- FIG. 3 is an exemplary candidate table that includes host properties and corresponding network adapter property table entries.
- a migration agent (as part of a distributed policy service) manages candidate table 300 in order to track host requirements and network adapter requirements for virtual machines that execute stateful offload data transmissions.
- a local distributed policy server may manage candidate table 300 , which would include table entries at a local virtual network level.
- a root distributed policy server may manage candidate table 300 , which would include table entries at a global overlay network environment level (see FIG. 10 and corresponding text for further details).
- Candidate table 300 includes a list of table entries, which include host names (column 310 ) and host properties (column 320 ). For example, a host system may provision a particular amount of processing power, memory, and bandwidth to a virtual machine. In one embodiment, column 320 may include minimum, nominal, and/or maximum host properties.
- the table entries also include network adapter information for network adapters utilized by corresponding host systems.
- Column 330 includes network adapter identifiers and column 340 includes network adapter properties.
- the network adapter properties identify the network adapter's vendor ID and device ID. As such, the migration agent may discover an equivalent (matching) network adapter in order to migrate hardware state data in its native format to a different network adapter.
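- A literal rendering of such a table, using the four columns of FIG. 3 (host name, host properties, network adapter identifier, network adapter properties), is sketched below; the sample values are invented:

```python
# Hypothetical candidate table with FIG. 3's four columns; values are invented.
candidate_table = [
    # (host name, host properties,                     adapter id, adapter properties)
    ("hostA", {"memory_gb": 8,  "bandwidth_gbps": 1},  "na1", {"vendor_id": 0x14E4, "device_id": 0x16A1}),
    ("hostB", {"memory_gb": 32, "bandwidth_gbps": 10}, "na2", {"vendor_id": 0x15B3, "device_id": 0x1003}),
]

def hosts_with_matching_adapter(table, vendor_id, device_id):
    """Hosts whose adapter matches the source adapter's identifiers."""
    return [host for host, _, _, props in table
            if props["vendor_id"] == vendor_id and props["device_id"] == device_id]

print(hosts_with_matching_adapter(candidate_table, 0x15B3, 0x1003))   # ['hostB']
```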
- FIG. 4 is an exemplary flowchart showing steps taken in discovering a destination system and migrating a virtual machine from a source system to the destination system.
- Migration agent processing commences at 400 , whereupon the migration agent receives a request from administrator 415 to migrate a virtual machine executing on a source system (step 410 ).
- the virtual machine transmits stateful offload data packets (e.g., RDMA) that traverse through a network adapter, which utilizes hardware state data to process the data packets.
- the migration agent identifies a source network adapter through which the virtual machine's data packets traverse (e.g., included in request or identified via a candidate table). A determination is made as to whether the network adapter's hardware state is movable (e.g., the adapter supports extraction, decision 430 ). If the network adapter's hardware state is not movable, decision 430 branches to the “No” branch, whereupon the migration agent returns an error to administrator 415 at step 435 , and ends at step 438 .
- decision 430 branches to the “Yes” branch, whereupon the migration agent proceeds through a series of steps to discover a suitable destination system whose network adapter supports the hardware state data utilized by the source network adapter (pre-defined process block 440 , see FIG. 5 and corresponding text for further details).
- the migration agent issues an extraction command to the source network adapter (e.g., through its device driver or hypervisor) to quiesce I/O and memory activity, and copy the hardware state data to a shared memory location (see FIG. 6 and corresponding text for further details).
- source system 105 sends an indication to the migration agent (received at step 570 ) that the hardware state data has been copied to shared memory.
- the migration agent sends a migration request to source system and destination system to establish a connection and migrate the virtual machine (includes the hardware state data) from source system 105 to destination system 115 (pre-defined process block 480 , see FIG. 7 and corresponding text for further details).
- Once migrated, destination system 115's hypervisor configures its destination network adapter according to the migrated hardware state data.
- the virtual machine resumes operation on destination system 115 at step 490 , and migration agent processing ends at 495 .
- FIG. 5 is an exemplary flowchart showing steps taken in a migration agent discovering a suitable destination system that includes a compatible host and an equivalent network adapter.
- an equivalent network adapter is an adapter that is able to utilize the source network adapter's hardware state data in its native hardware format (e.g., address translations are not required).
- Destination discovery processing commences at 500 , whereupon the migration agent (included in the distributed policy service) identifies system requirements corresponding to a migrating virtual machine at step 520 .
- the virtual machine system requirements may include processing speed, memory requirements, network bandwidth requirements, etc.
- the migration agent accesses candidate table 525 and identifies compatible host systems that meet the host system requirements.
- a host system is compatible when it is able to meet or exceed the virtual machine system requirements.
- a virtual machine may require 4 GB of system memory and a host system may be able to provide 6 GB of system memory to the virtual machine.
- the migration agent identifies the source network adapter's native hardware properties included in candidate table 525 .
- the source network adapter's native hardware properties include the source network adapter's device id, firmware version, and other relevant adapter properties.
- the migration agent identifies one or more network adapters utilized by the compatible host systems (from step 530 ) that are equivalent to the source network adapter's native hardware properties (step 550 ).
- the migration agent selects one of the equivalent network adapters at step 560 .
- the migration agent sends a message to the network administrator and allows the network administrator to select one of the equivalent network adapters. Processing returns at 580 .
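- Tying the FIG. 5 steps together, the discovery loop might be sketched as follows, reusing the table shape from the FIG. 3 sketch above; the administrator-selection fallback reflects the embodiment just described:

```python
# Hypothetical FIG. 5 discovery loop: keep hosts that meet the VM's
# requirements and whose adapter equals the source adapter's properties.

def discover_destination(candidate_table, vm_requirements, source_adapter_props,
                         ask_administrator=None):
    equivalents = []
    for host, host_props, _adapter_id, adapter_props in candidate_table:
        meets = all(host_props.get(k, 0) >= v for k, v in vm_requirements.items())
        matches = adapter_props == source_adapter_props   # device id, firmware, etc.
        if meets and matches:
            equivalents.append(host)
    if len(equivalents) > 1 and ask_administrator:
        return ask_administrator(equivalents)   # optional: administrator chooses
    return equivalents[0] if equivalents else None


table = [("hostA", {"memory_gb": 8}, "na1", {"device_id": 1, "firmware": "2.1"})]
print(discover_destination(table, {"memory_gb": 4}, {"device_id": 1, "firmware": "2.1"}))
# hostA
```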
- FIG. 6 is an exemplary flowchart showing steps taken in a host system preparing a virtual machine for migration.
- Source system processing commences at 600 , whereupon the source system receives a state extraction command from migration agent 160 to migrate a particular virtual machine executing on the source host system (step 610 ).
- the source system (e.g., via a device driver or hypervisor) instructs source network adapter 150 to extract hardware state data pertaining to the migrating virtual machine and, at step 640, the source system copies the hardware state data to shared memory 142, which is system memory and part of the virtual machine that migrates to the destination system.
- the source system informs migration agent 160 that the virtual machine is ready for migration at step 650 , and source system processing ends at 660 .
- FIG. 7 is an exemplary flowchart showing steps taken in migrating a virtual machine from a source system to a destination system.
- Source system processing commences at 700, whereupon the source system receives a request from migration agent 160 to migrate the virtual machine from the source system to the destination system.
- Destination system processing commences, whereupon the destination system receives a corresponding request at 755 .
- the source system's hypervisor establishes a connection with the destination system's hypervisor and requests the destination system to reserve resources for the migrating virtual machine.
- the request includes remote adapter configuration parameters, which indicate a memory block starting address in the migrating virtual machine's shared memory where hardware state data is stored (step 710 ).
- the destination system's hypervisor allocates space for the virtual machine.
- the hypervisors migrate the virtual machine from the source system to the destination system and, in one embodiment, the destination system verifies the migration, such as by a checksum computation.
- the destination system's hypervisor allocates a device driver to the migrated logical partition at step 770 in order for the virtual machine to communicate with the destination network adapter.
- the destination system's hypervisor sends a “State Insert” command to the destination network adapter, which instructs the destination network adapter to retrieve the hardware state data from shared memory at the memory block starting address and configure the destination network adapter accordingly.
- the memory block starting address is included in the resource request sent by the source system's hypervisor (step 710 discussed above).
- the source hypervisor sends a separate message to the destination hypervisor that includes the memory block starting address. Once configured, the destination hypervisor sends a migration acknowledgement to the source hypervisor at step 789 , and destination hypervisor processing ends at 790 .
- the source hypervisor receives the successful migration acknowledgement at step 720 , and frees the resources (virtual machine, device driver, shared memory, etc.) at the source system at step 730 .
- Source hypervisor processing ends at 735 .
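- The two-hypervisor exchange of FIG. 7 reduces to a short request/acknowledge protocol. The sketch below compresses the message exchange into direct method calls; all names are invented for illustration:

```python
# Hypothetical FIG. 7 handshake, destination side. The resource request
# carries the memory block starting address of the extracted hardware state.

class DestinationHypervisor:
    def reserve(self, config):
        self.start_addr = config["state_start_addr"]   # from step 710's request
        self.vm_memory = None

    def receive_vm(self, vm_image: bytes):
        self.vm_memory = bytearray(vm_image)    # migrated VM incl. shared memory

    def state_insert(self, adapter_memory: bytearray, length: int):
        a = self.start_addr
        adapter_memory[:length] = self.vm_memory[a:a + length]
        return "migration-ack"                  # source then frees its resources


dest = DestinationHypervisor()
dest.reserve({"state_start_addr": 16})           # establish connection, reserve
dest.receive_vm(bytes(16) + b"hwstate")          # stream the virtual machine
ack = dest.state_insert(bytearray(7), length=7)  # configure destination adapter
assert ack == "migration-ack"
```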
- FIG. 8 is an exemplary diagram showing a network adapter tracking and storing hardware state data for modules executing on a virtual machine.
- Virtual machine 135 utilizes modules 800 - 850 to send/receive stateful offload data packets to/from other virtual machines through source network adapter 150 .
- Each of modules 800-850 has a “state” on source network adapter 150, which is stored in hardware state data 152.
- hardware state data 152 includes a grouping of state information that represents a connection/datagram state.
- hardware state data 152 may include the following:
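- The list itself does not survive in this text. As an illustration only, drawing on the state items named elsewhere in the disclosure (queue pairs, completion queues, memory registrations, ARP tables, sequence numbers), such a grouping might resemble:

```python
# Illustration only: the disclosure's actual field list is not reproduced
# here. Fields are drawn from state items named elsewhere in the text.
from dataclasses import dataclass, field

@dataclass
class ConnectionState:
    queue_pairs: list = field(default_factory=list)            # send/receive QPs
    completion_queues: list = field(default_factory=list)      # CQs for finished work
    memory_registrations: dict = field(default_factory=dict)   # registered buffers
    arp_table: dict = field(default_factory=dict)              # IP-to-MAC entries
    tcp_sequence_number: int = 0                 # Layer 4 retransmission state
```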
- hardware state data 152 is copied to a shared memory area and migrates with virtual machine 135 over to the destination system.
- the destination system configures its destination network adapter according to the migrated hardware state data 152 .
- source network adapter 150 may manage thousands of instances of hardware state data 152, each corresponding to a different virtual machine. In this embodiment, only the hardware state data 152 corresponding to the migrating virtual machine is copied to the destination system.
- FIG. 9 is an exemplary diagram showing the migration of hardware state data from a source network adapter to a destination network adapter.
- Source network adapter 150 utilizes hardware state data 152 to send stateful offload data packets from a source virtual machine to a destination virtual machine.
- hardware state data 152 is copied to shared memory 142 at memory block starting address 800 .
- virtual machine 135 is copied to a destination system as virtual machine 175.
- hardware state data 152 copies over in its native hardware format and is still stored at memory block starting address 800 on shared memory 182.
- hardware state data 152 is copied to destination network adapter 190 in its native hardware format due to the fact that destination network adapter 190 is equivalent to source network adapter 150 .
- Because destination network adapter 190 is equivalent to source network adapter 150, destination network adapter 190 utilizes hardware state data in its native format; thus, address translations are not required.
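- Equivalence is what makes the insert a raw, byte-for-byte copy. The contrast below is illustrative; the translation branch is hypothetical and is precisely what the disclosure avoids by requiring equivalent adapters:

```python
# Illustrative contrast: equivalent adapters take the raw-copy path.

def load_state(dest_adapter: dict, blob: bytes, equivalent: bool):
    if equivalent:
        dest_adapter["state"] = blob    # native format, byte-for-byte
    else:
        # Hypothetical fallback the disclosure sidesteps: every embedded
        # address and handle would need translating to the new adapter's layout.
        raise NotImplementedError("address translation between adapter formats")

dest = {}
load_state(dest, b"\xde\xad\xbe\xef", equivalent=True)
```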
- FIG. 10 is an exemplary diagram showing a distributed policy service accessing a candidate table storage area to identify a suitable destination system.
- Migration agent 160 interfaces with local network policy server to identify a suitable destination system.
- local network policy server 1000 manages policies and physical path translations pertaining to the source system's overlay network (e.g., overlay network environment 100 ).
- policy servers for different overlay networks are co-located and differentiate policy requests from different migration agents according to their corresponding overlay network identifier.
- Distributed policy service 165 is structured hierarchically and, when local network policy server 1000 is not able to locate a suitable destination system, local network policy server 1000 queries root policy server 1010 to search for a suitable destination system. In turn, root policy server 1010 accesses candidate table store 1015 and sends a suitable destination system identifier to local network policy server 1000, which sends it to migration agent 160. In one embodiment, root policy server 1010 may send local network policy server 1000 a message to query local network policy server 1030 for a suitable destination system, which manages other host systems than those local network policy server 1000 manages.
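- That escalation path (local policy server, then root, then possibly another local server) amounts to a small hierarchical lookup. The sketch below compresses it so the root queries the sibling server directly; all structure and names are invented:

```python
# Hypothetical hierarchical lookup mirroring FIG. 10: ask the local policy
# server first, then the root, which may consult other local servers.

class PolicyServer:
    def __init__(self, known_destinations, parent=None, siblings=()):
        self.known = known_destinations
        self.parent = parent
        self.siblings = siblings

    def find_destination(self, requirements):
        for dest in self.known:
            if dest["memory_gb"] >= requirements["memory_gb"]:
                return dest["name"]
        if self.parent:                         # escalate to the root server
            return self.parent.find_destination(requirements)
        for sibling in self.siblings:           # root consults other local servers
            hit = sibling.find_destination(requirements)
            if hit:
                return hit
        return None


other_local = PolicyServer([{"name": "hostZ", "memory_gb": 64}])
root = PolicyServer([], siblings=(other_local,))
local = PolicyServer([{"name": "hostA", "memory_gb": 2}], parent=root)
print(local.find_destination({"memory_gb": 16}))   # 'hostZ'
```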
- FIG. 11 is an exemplary diagram showing virtual network abstractions that are overlayed onto a physical network space.
- Virtual networks 1100 are part of an overlay network environment and include policies (e.g., policies 1103 - 1113 ) that provide an end-to-end virtual connectivity between virtual machines (e.g., virtual machines 1102 - 1110 ).
- Each of virtual networks 1100 corresponds to a unique virtual identifier, which allows concurrent operation of multiple virtual networks over physical space 1120.
- some of virtual networks 1100 may include a portion of virtual machines 1102 - 1110 , while other virtual networks 1100 may include different virtual machines and different policies than what is shown in FIG. 11 .
- policies 1103 - 1113 define how different virtual machines communicate with each other (or with external networks).
- a policy may define quality of service (QoS) requirements between a set of virtual machines; access controls associated with particular virtual machines; or a set of virtual or physical appliances (equipment) to traverse when sending or receiving data.
- some appliances may include accelerators such as compression, IP Security (IPSec), SSL, or security appliances such as a firewall or an intrusion detection system.
- a policy may be configured to disallow communication between the source virtual machine and the destination virtual machine.
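- Gathered into one record, the policy elements named above (QoS, access controls, an appliance chain, or an outright block) could be modeled as in the following sketch; the field set is an assumption:

```python
# Hypothetical policy record covering the elements named above.
from dataclasses import dataclass, field

@dataclass
class VirtualNetworkPolicy:
    min_bandwidth_mbps: int = 0                      # QoS floor between VM sets
    allowed_vms: set = field(default_factory=set)    # access control list
    appliance_chain: tuple = ()                      # e.g., ("ipsec", "firewall")
    disallow: bool = False                           # block communication outright

    def permits(self, source_vm: str, dest_vm: str) -> bool:
        return not self.disallow and {source_vm, dest_vm} <= self.allowed_vms


policy = VirtualNetworkPolicy(allowed_vms={"vm1102", "vm1110"},
                              appliance_chain=("firewall",))
print(policy.permits("vm1102", "vm1110"))   # True
```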
- Virtual networks 1100 are logically overlayed onto physical space 1120 , which includes physical entities 1135 through 1188 (hosts, switches, and routers). While the way in which a policy is enforced in the system affects and depends on physical space 1120 , virtual networks 1100 are more dependent upon logical descriptions in the policies. As such, multiple virtual networks 1100 may be overlayed onto physical space 1120 . As can be seen, physical space 1120 is divided into subnet X 1125 and subnet Y 1130 . The subnets are joined via routers 1135 and 1140 . Virtual networks 1100 are independent of physical constraints of physical space 1120 (e.g., L2 layer constraints within a subnet). Therefore, a virtual network may include physical entities included in both subnet X 1125 and subnet Y 1130 .
- the virtual network abstractions support address independence between different virtual networks 1100 .
- two different virtual machines operating in two different virtual networks may have the same IP address (see the sketch following this list).
- the virtual network abstractions support deploying virtual machines, which belong to the same virtual networks, onto different hosts that are located in different physical subnets (includes switches and/or routers between the physical entities).
- virtual machines belonging to different virtual networks may be hosted on the same physical host.
- the virtual network abstractions support virtual machine migration anywhere in a data center without changing the virtual machine's network address and losing its network connection.
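- One way to read the address-independence property above: the overlay resolves endpoints by the pair (virtual network identifier, IP address) rather than by IP address alone, as the short illustration below shows:

```python
# Illustration: keying endpoints by (virtual network ID, IP address) lets two
# VMs in different virtual networks share an IP address without conflict.

endpoints = {}
endpoints[("vnet-A", "10.0.0.5")] = "vm1102"
endpoints[("vnet-B", "10.0.0.5")] = "vm1106"   # same IP, different virtual network

assert endpoints[("vnet-A", "10.0.0.5")] != endpoints[("vnet-B", "10.0.0.5")]
```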
- FIG. 12 illustrates information handling system 1200 , which is a simplified example of a computer system capable of performing the computing operations described herein.
- Information handling system 1200 includes one or more processors 1210 coupled to processor interface bus 1212 .
- Processor interface bus 1212 connects processors 1210 to Northbridge 1215 , which is also known as the Memory Controller Hub (MCH).
- Northbridge 1215 connects to system memory 1220 and provides a means for processor(s) 1210 to access the system memory.
- Graphics controller 1225 also connects to Northbridge 1215 .
- PCI Express bus 1218 connects Northbridge 1215 to graphics controller 1225 .
- Graphics controller 1225 connects to display device 1230 , such as a computer monitor.
- Northbridge 1215 and Southbridge 1235 connect to each other using bus 1219 .
- the bus is a Direct Media Interface (DMI) bus that transfers data at high speeds in each direction between Northbridge 1215 and Southbridge 1235 .
- a Peripheral Component Interconnect (PCI) bus connects the Northbridge and the Southbridge.
- Southbridge 1235 also known as the I/O Controller Hub (ICH) is a chip that generally implements capabilities that operate at slower speeds than the capabilities provided by the Northbridge.
- Southbridge 1235 typically provides various busses used to connect various components. These busses include, for example, PCI and PCI Express busses, an ISA bus, a System Management Bus (SMBus or SMB), and/or a Low Pin Count (LPC) bus.
- the LPC bus often connects low-bandwidth devices, such as boot ROM 1296 and “legacy” I/O devices (using a “super I/O” chip).
- the “legacy” I/O devices ( 1298 ) can include, for example, serial and parallel ports, keyboard, mouse, and/or a floppy disk controller.
- the LPC bus also connects Southbridge 1235 to Trusted Platform Module (TPM) 1295 .
- Other components often included in Southbridge 1235 include a Direct Memory Access (DMA) controller, a Programmable Interrupt Controller (PIC), and a storage device controller, which connects Southbridge 1235 to nonvolatile storage device 1285 , such as a hard disk drive, using bus 1284 .
- ExpressCard 1255 is a slot that connects hot-pluggable devices to the information handling system.
- ExpressCard 1255 supports both PCI Express and USB connectivity as it connects to Southbridge 1235 using both the Universal Serial Bus (USB) and the PCI Express bus.
- Southbridge 1235 includes USB Controller 1240 that provides USB connectivity to devices that connect to the USB. These devices include webcam (camera) 1250 , infrared (IR) receiver 1248 , keyboard and trackpad 1244 , and Bluetooth device 1246 , which provides for wireless personal area networks (PANs).
- USB Controller 1240 also provides USB connectivity to other miscellaneous USB connected devices 1242 , such as a mouse, removable nonvolatile storage device 1245 , modems, network cards, ISDN connectors, fax, printers, USB hubs, and many other types of USB connected devices. While removable nonvolatile storage device 1245 is shown as a USB-connected device, removable nonvolatile storage device 1245 could be connected using a different interface, such as a Firewire interface, etcetera.
- Wireless Local Area Network (LAN) device 1275 connects to Southbridge 1235 via the PCI or PCI Express bus 1272 .
- LAN device 1275 typically implements one of the IEEE 802.11 standards of over-the-air modulation techniques that all use the same protocol to wirelessly communicate between information handling system 1200 and another computer system or device.
- Optical storage device 1290 connects to Southbridge 1235 using Serial ATA (SATA) bus 1288 .
- Serial ATA adapters and devices communicate over a high-speed serial link.
- the Serial ATA bus also connects Southbridge 1235 to other forms of storage devices, such as hard disk drives.
- Audio circuitry 1260, such as a sound card, connects to Southbridge 1235 via bus 1258.
- Audio circuitry 1260 also provides functionality such as audio line-in and optical digital audio in port 1262 , optical digital output and headphone jack 1264 , internal speakers 1266 , and internal microphone 1268 .
- Ethernet controller 1270 connects to Southbridge 1235 using a bus, such as the PCI or PCI Express bus. Ethernet controller 1270 connects information handling system 1200 to a computer network, such as a Local Area Network (LAN), the Internet, and other public and private computer networks.
- an information handling system may take many forms.
- an information handling system may take the form of a desktop, server, portable, laptop, notebook, or other form factor computer or data processing system.
- an information handling system may take other form factors such as a personal digital assistant (PDA), a gaming device, ATM machine, a portable telephone device, a communication device or other devices that include a processor and memory.
- the Trusted Platform Module (TPM 1295) shown in FIG. 12 and described herein to provide security functions is but one example of a hardware security module (HSM). Therefore, the TPM described and claimed herein includes any type of HSM including, but not limited to, hardware security devices that conform to the Trusted Computing Group (TCG) standard entitled “Trusted Platform Module (TPM) Specification Version 1.2.”
- the TPM is a hardware security subsystem that may be incorporated into any number of information handling systems, such as those outlined in FIG. 13 .
- FIG. 13 provides an extension of the information handling system environment shown in FIG. 12 to illustrate that the methods described herein can be performed on a wide variety of information handling systems that operate in a networked environment.
- Types of information handling systems range from small handheld devices, such as handheld computer/mobile telephone 1310 to large mainframe systems, such as mainframe computer 1370 .
- Examples of handheld computer 1310 include personal digital assistants (PDAs) and personal entertainment devices, such as MP3 players, portable televisions, and compact disc players.
- Other examples of information handling systems include pen, or tablet, computer 1320 , laptop, or notebook, computer 1330 , workstation 1340 , personal computer system 1350 , and server 1360 .
- Other types of information handling systems that are not individually shown in FIG. 13 are represented by information handling system 1380 .
- the various information handling systems can be networked together using computer network 1300 .
- Types of computer network that can be used to interconnect the various information handling systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN), other wireless networks, and any other network topology that can be used to interconnect the information handling systems.
- Many of the information handling systems include nonvolatile data stores, such as hard drives and/or nonvolatile memory.
- Some of the information handling systems shown in FIG. 13 are depicted with separate nonvolatile data stores (server 1360 utilizes nonvolatile data store 1365, mainframe computer 1370 utilizes nonvolatile data store 1375, and information handling system 1380 utilizes nonvolatile data store 1385).
- the nonvolatile data store can be a component that is external to the various information handling systems or can be internal to one of the information handling systems.
- removable nonvolatile storage device 1245 can be shared among two or more information handling systems using various techniques, such as connecting the removable nonvolatile storage device 1245 to a USB port or other connector of the information handling systems.
Abstract
An approach is provided in which a migration agent receives a message to migrate a virtual machine from a first system to a second system. The first system extracts hardware state data stored in a native format from a memory area located on the first system's network adapter. The hardware state data is utilized by the first system's network adapter to process data packets generated by the virtual machine. Next, the virtual machine is migrated to the second system, which includes copying the extracted hardware state data from the first system to the second system. In turn, the second system configures a corresponding second network adapter by writing the copied hardware state data to a memory located on the second network adapter.
Description
- The present disclosure relates to migrating a virtual machine, which generates stateful offload data packets, from a first system to a second system. More particularly, the present disclosure relates to extracting hardware state data from a source network adapter and copying the extracted hardware state data to a destination system during the migration.
- Modern communication network adapters support “stateful” offload data transmission formats in which the network adapters perform particular processing tasks in order to reduce a host system's processing load. Typical stateful offload formats include Remote Direct Memory Access (RDMA), Internet Wide RDMA Protocol (iWARP), Infiniband (IB), and TCP Offload Engine (TOE). In order to support the stateful offload formats, the network adapters restrict the “state” for any given virtual machine connection to the context of the network adapter's instance corresponding to the virtual machine. Stateful offload information that represents this context includes hardware state data that describes hardware properties on a per virtual machine basis, such as information corresponding to connections, registers, memory registrations, structures used to communicate with the virtual machine (Queue Pairs, Completion Queues, etc.), and other miscellaneous data structures, such as address resolution protocol (ARP) tables.
- According to one embodiment of the present disclosure, an approach is provided in which a migration agent receives a message to migrate a virtual machine from a first system to a second system. The first system extracts hardware state data stored in a native format from a memory area located on the first system's network adapter. The hardware state data is utilized by the first system's network adapter to process data packets generated by the virtual machine. Next, the virtual machine is migrated to the second system, which includes copying the extracted hardware state data from the first system to the second system. In turn, the second system configures a corresponding second network adapter by writing the copied hardware state data to a memory located on the second network adapter.
- The foregoing is a summary and thus contains, by necessity, simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present disclosure, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
- The present disclosure may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings, wherein:
- FIG. 1 is an exemplary diagram showing a migration agent migrating a logical partition, which includes a virtual machine and native network adapter hardware state data, from a source system to a destination system;
- FIG. 2 is an exemplary diagram showing a graphical representation of discovering a suitable destination system;
- FIG. 3 is an exemplary candidate table that includes host properties and corresponding network adapter property table entries;
- FIG. 4 is an exemplary flowchart showing steps taken in discovering a destination system and migrating a virtual machine to the destination system;
- FIG. 5 is an exemplary flowchart showing steps taken in discovering a suitable destination system that includes a compatible host and an equivalent network adapter compared with a source system;
- FIG. 6 is an exemplary flowchart showing steps taken in a host system preparing a virtual machine for migration;
- FIG. 7 is an exemplary flowchart showing steps taken in migrating a logical partition from a source system to a destination system;
- FIG. 8 is an exemplary diagram showing a network adapter tracking and storing hardware state data for modules executing on a virtual machine;
- FIG. 9 is an exemplary diagram showing the migration of hardware state data from a source network adapter to a destination network adapter;
- FIG. 10 is an exemplary diagram showing a distributed policy service accessing a candidate table storage area to identify a suitable destination system;
- FIG. 11 is an exemplary diagram showing virtual network abstractions that are overlayed onto a physical network space;
- FIG. 12 is an exemplary block diagram of a data processing system in which the methods described herein can be implemented; and
- FIG. 13 provides an extension of the information handling system environment shown in FIG. 12 to illustrate that the methods described herein can be performed on a wide variety of information handling systems which operate in a networked environment.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
- As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the present disclosure are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The following detailed description will generally follow the summary of the disclosure, as set forth above, further explaining and expanding the definitions of the various aspects and embodiments of the disclosure as necessary.
The present disclosure describes a method for migrating a virtual machine from a source system to a destination system. The migration includes extracting hardware state data from a source network adapter corresponding to the virtual machine, and copying the hardware state data to a destination network adapter included in the destination system. As such, a system administrator has the flexibility to migrate the virtual machine to the destination system as required, such as to resolve security issues or network bandwidth issues.
-
FIG. 1 is an exemplary diagram showing a migration agent migrating a virtual machine, which includes native network adapter hardware state data, from a source system to a destination system. Overlay network environment 100 overlays onto a physical network and utilizes logical policies to send data between virtual machines over virtual networks. As such, the virtual networks are independent of physical topology constraints of the physical network (see FIG. 11 and corresponding text for further details).
- Overlay network environment 100 includes source system 105. Source system 105 includes host 110 and source network adapter 150. Host 110 includes hypervisor 145, which provisions virtual machine 135 and device driver 140. Virtual machine 135 utilizes device driver 140 to send stateful offload data packets to source network adapter 150. For example, the stateful offload data packets may adhere to a stateful offload format such as Remote Direct Memory Access (RDMA), Internet Wide RDMA Protocol (iWARP), Infiniband (IB), or TCP Offload Engine (TOE).
- In turn, source network adapter 150 processes the data packets utilizing hardware state data 152 and transmits the data packets to a destination virtual machine over overlay network environment 100. Hardware state data 152 includes stateful information that represents source network adapter 150's context, such as data pertaining to connections and structures used to communicate with virtual machine 135 (e.g., queue pairs, completion queues, etc.), and may also include register information, memory registrations, and other miscellaneous data structures (e.g., ARP tables, sequence numbers, retransmission information, etc.).
- In one embodiment, hardware state data 152 includes Layer 4 (of the OSI Model) connection state information that allows source network adapter 150 to perform retransmissions and packet acknowledgements, which relieves host 110 of performing such tasks. For example, iWARP provides RDMA capability over a standard Ethernet fabric, which utilizes application buffers that are mapped to an underlying Ethernet adapter. When communication is initiated, a connection is made with the network adapter, which initiates a TCP connection. Once active, data in the application's outgoing buffers is encapsulated by the network adapter into TCP segments as packets are built.
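- As an illustration of the kind of Layer 4 context involved, the following minimal Python sketch models per-connection state an offload-capable adapter might track; the class, its fields, and the usage values are hypothetical and are not defined by this disclosure or any particular adapter.

```python
from dataclasses import dataclass, field

# Hypothetical per-connection Layer 4 state; names are illustrative only.
@dataclass
class OffloadConnectionState:
    local_port: int
    remote_port: int
    snd_nxt: int = 0            # next sequence number to transmit
    rcv_nxt: int = 0            # next sequence number expected from the peer
    unacked: list = field(default_factory=list)  # (seq, payload) awaiting ack

    def on_ack(self, ack_seq: int) -> None:
        # The adapter, not the host, retires acknowledged segments.
        self.unacked = [seg for seg in self.unacked if seg[0] >= ack_seq]

conn = OffloadConnectionState(local_port=50000, remote_port=80)
conn.unacked.append((conn.snd_nxt, b"payload"))
conn.on_ack(1)   # the segment at sequence 0 is retired without host involvement
```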
- A system administrator may wish to migrate virtual machine 135 from source system 105 to a different system, such as for security purposes or network bandwidth management purposes. As such, the system administrator may send a migration command to migration agent 160 (included in distributed policy service 165), which is responsible for discovering a suitable destination system that includes a compatible host and an equivalent network adapter that supports overlay network environment 100. In one embodiment, a compatible host is one that satisfies a migrating virtual machine's system requirements, such as CPU requirements, memory requirements, bandwidth requirements, etc. In one embodiment, an equivalent network adapter is one that corresponds to the same vendor identifier and the same revision identifier as source network adapter 150.
- Migration agent 160 proceeds through a series of discovery steps to identify destination system 115 as a suitable destination system. In one embodiment, migration agent 160 utilizes a candidate table, which includes host properties and network adapter properties, with which to identify the suitable destination system (see FIGS. 3, 5, and corresponding text for further details). In this embodiment, migration agent 160 determines that host 120 supports virtual machine 135's system requirements and that destination network adapter 190 is equivalent to source network adapter 150 (e.g., has a matching device ID, firmware version, and other relevant adapter attributes).
- In order to migrate virtual machine 135, hardware state data 152 must also be migrated. Hardware state data 152, however, is partially or completely opaque to device driver 140 and virtual machine 135. As such, migration agent 160 indicates to source network adapter 150 (through device driver 140, hypervisor 145, or other driving agent) to extract hardware state data 152. Source network adapter 150 quiesces I/O and memory activity to avoid state changes or corruption during the extraction process, and copies hardware state data 152 via device driver 140 to shared memory 142 at a specified memory block starting address. The memory block starting address may be negotiated as part of its initialization or provided as a parameter in the extraction command to source network adapter 150.
- Migration agent 160 sends a migration request to source system 105 and destination system 115 to migrate virtual machine 135. In turn, hypervisors 145 and 185 migrate virtual machine 135 to destination system 115, shown as virtual machine 175 and shared memory 182. In addition, hypervisor 185 allocates device driver 180 to logical partition 170, and sends a state insert command to destination network adapter 190. The state insert command instructs destination network adapter 190 to retrieve the hardware state data from shared memory 182 at the memory block starting address, and load hardware state data 192 onto network adapter 190. As a result, hardware state data 152 maintains its native form when stored in destination network adapter 190, thus eliminating address translation steps.
- In one embodiment, destination network adapter 190 performs a checksum to validate the hardware state data. In another embodiment, destination network adapter 190 may utilize a header or individual flags to efficiently set the context. In yet another embodiment, when source network adapter 150 remains active during the migration, migration agent 160 may facilitate one or more transactions between source network adapter 150 and destination network adapter 190 to verify the equivalence of their states.
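- The checksum embodiment can be illustrated with the brief sketch below; the choice of SHA-256 and the helper name are assumptions for demonstration only, as no particular algorithm is specified by the disclosure.

```python
import hashlib

def state_checksum(state_bytes: bytes) -> str:
    # Digest computed over the opaque state bytes; algorithm choice is assumed.
    return hashlib.sha256(state_bytes).hexdigest()

extracted = b"\x00\x01\x02\x03"              # stand-in for extracted hardware state
digest = state_checksum(extracted)           # computed before migration
# ...the state bytes travel with the migrating virtual machine...
assert state_checksum(extracted) == digest   # destination inserts only if valid
```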
- FIG. 2 is an exemplary diagram showing a graphical representation of discovering a suitable destination system. In one embodiment, migration agent 160 iteratively selects a suitable destination system based upon available hosts, compatible hosts, and equivalent network adapters. In another embodiment, migration agent 160 uses a candidate table, such as that shown in FIG. 3, to perform such iteration steps.
- The migration agent identifies available hosts 220 included in overlay network environment 100. Available hosts 220 include hosts 250-290, each utilizing various network adapters. The example in FIG. 2 shows that the migration agent determines that hosts 250-268 do not satisfy the host requirements of the migrating virtual machine (e.g., not enough memory or bandwidth availability). As such, the migration agent identifies hosts 272-290 as "compatible" hosts 230, which meet or exceed the host requirements of the migrating virtual machine.
- Next, the migration agent analyzes the network adapters utilized by compatible hosts 230 in order to identify a network adapter that is equivalent to the network adapter utilized by the migrating virtual machine. In one embodiment, an equivalent network adapter is one that matches the migrating virtual machine's network adapter in both device ID and vendor ID. The example shown in FIG. 2 shows that network adapter 295 is equivalent to the migrating virtual machine's network adapter. As such, the migration agent sends a message to the source and destination systems' hypervisors to establish a connection and migrate the virtual machine from the source system to the destination system.
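- This two-stage narrowing (compatible hosts first, then equivalent adapters) can be sketched as follows; the dictionary keys such as memory_gb and vendor_id are invented for illustration and are not field names taken from the disclosure.

```python
# Stage 1: keep hosts that meet or exceed the VM's requirements (compatible hosts).
# Stage 2: keep hosts whose adapter matches the source adapter's identity (equivalent).
def find_candidates(hosts, vm_reqs, src_adapter):
    compatible = [h for h in hosts
                  if h["memory_gb"] >= vm_reqs["memory_gb"]
                  and h["bandwidth_mbps"] >= vm_reqs["bandwidth_mbps"]]
    return [h for h in compatible
            if h["adapter"]["vendor_id"] == src_adapter["vendor_id"]
            and h["adapter"]["device_id"] == src_adapter["device_id"]]
```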
- FIG. 3 is an exemplary candidate table that includes host properties and corresponding network adapter property table entries. A migration agent (as part of a distributed policy service) manages candidate table 300 in order to track host requirements and network adapter requirements for virtual machines that execute stateful offload data transmissions. In one embodiment, a local distributed policy server may manage candidate table 300, which would include table entries at a local virtual network level. In another embodiment, a root distributed policy server may manage candidate table 300, which would include table entries at a global overlay network environment level (see FIG. 10 and corresponding text for further details).
- Candidate table 300 includes a list of table entries, which include host names (column 310) and host properties (column 320). For example, a host system may provision a particular amount of processing power, memory, and bandwidth to a virtual machine. In one embodiment, column 320 may include minimum, nominal, and/or maximum host properties.
- The table entries also include network adapter information for the network adapters utilized by corresponding host systems. Column 330 includes network adapter identifiers and column 340 includes network adapter properties. The network adapter properties, in one embodiment, identify the network adapter's vendor ID and device ID. As such, the migration agent may discover an equivalent (matching) network adapter in order to migrate hardware state data in its native format to a different network adapter.
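- One possible in-memory shape for candidate table 300 is sketched below; the column names mirror the figure, but the concrete schema and all values shown are assumptions.

```python
# Each entry pairs a host's provisioning properties (column 320) with the
# identity of its network adapter (columns 330 and 340); values are invented.
candidate_table = [
    {
        "host": "host-272",
        "host_properties": {"cpu_ghz": 3.0, "memory_gb": 64, "bandwidth_mbps": 10_000},
        "adapter_id": "adapter-295",
        "adapter_properties": {"vendor_id": 0x15B3, "device_id": 0x1003},
    },
]
```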
- FIG. 4 is an exemplary flowchart showing steps taken in discovering a destination system and migrating a virtual machine from a source system to the destination system. Migration agent processing commences at 400, whereupon the migration agent receives a request from administrator 415 to migrate a virtual machine executing on a source system (step 410). The virtual machine transmits stateful offload data packets (e.g., RDMA) that traverse through a network adapter, which utilizes hardware state data to process the data packets.
- At step 420, the migration agent identifies a source network adapter through which the virtual machine's data packets traverse (e.g., included in the request or identified via a candidate table). A determination is made as to whether the network adapter's hardware state is movable (e.g., the adapter supports extraction, decision 430). If the network adapter's hardware state is not movable, decision 430 branches to the "No" branch, whereupon the migration agent returns an error to administrator 415 at step 435, and ends at step 438.
- On the other hand, if the network adapter's hardware state is movable, decision 430 branches to the "Yes" branch, whereupon the migration agent proceeds through a series of steps to discover a suitable destination system whose network adapter supports the hardware state data utilized by the source network adapter (pre-defined process block 440, see FIG. 5 and corresponding text for further details).
- At step 450, the migration agent issues an extraction command to the source network adapter (e.g., through its device driver or hypervisor) to quiesce I/O and memory activity, and copy the hardware state data to a shared memory location (see FIG. 6 and corresponding text for further details).
- In turn, source system 105 sends an indication to the migration agent (received at step 470) that the hardware state data has been copied to shared memory. The migration agent sends a migration request to the source system and destination system to establish a connection and migrate the virtual machine (including the hardware state data) from source system 105 to destination system 115 (pre-defined process block 480, see FIG. 7 and corresponding text for further details). Once migrated, destination system 115's hypervisor configures its destination network adapter according to the migrated hardware state data. The virtual machine resumes operation on destination system 115 at step 490, and migration agent processing ends at 495.
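- The FIG. 4 flow compresses into the following illustrative sketch; every helper is a stub standing in for the corresponding flowchart step, not an API defined by this disclosure.

```python
def is_state_movable(adapter):                     # decision 430
    return adapter.get("supports_extraction", False)

def discover_destination(vm, adapter):             # pre-defined process block 440
    return {"host": "host-272"}                    # stub; see the discovery sketch above

def migrate(vm, source_adapter):
    if not is_state_movable(source_adapter):
        raise RuntimeError("hardware state not extractable")   # error path, step 435
    destination = discover_destination(vm, source_adapter)
    # Step 450: the extraction command quiesces I/O and copies the hardware
    # state data into the VM's shared memory (modeled here as a dict entry).
    vm["shared_memory"]["hw_state"] = source_adapter.get("hw_state", b"")
    return destination         # the migration request (block 480) follows

print(migrate({"shared_memory": {}}, {"supports_extraction": True, "hw_state": b"\x01"}))
```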
- FIG. 5 is an exemplary flowchart showing steps taken in a migration agent discovering a suitable destination system that includes a compatible host and an equivalent network adapter. In one embodiment, an equivalent network adapter is an adapter that is able to utilize the source network adapter's hardware state data in its native hardware format (e.g., address translations are not required).
- Destination discovery processing commences at 500, whereupon the migration agent (included in the distributed policy service) identifies system requirements corresponding to a migrating virtual machine at step 520. For example, the virtual machine system requirements may include processing speed, memory requirements, network bandwidth requirements, etc. At step 530, the migration agent accesses candidate table 525 and identifies compatible host systems that meet the host system requirements. In one embodiment, a host system is compatible when it is able to meet or exceed the virtual machine system requirements. For example, a virtual machine may require 4 GB of system memory and a host system may be able to provide 6 GB of system memory to the virtual machine.
- At step 540, the migration agent identifies the source network adapter's native hardware properties included in candidate table 525. In one embodiment, the source network adapter's native hardware properties include the source network adapter's device ID, firmware version, and other relevant adapter properties. Next, the migration agent identifies one or more network adapters utilized by the compatible host systems (from step 530) that are equivalent to the source network adapter's native hardware properties (step 550).
- In turn, the migration agent selects one of the equivalent network adapters at step 560. In one embodiment, the migration agent sends a message to the network administrator and allows the network administrator to select one of the equivalent network adapters. Processing returns at 580.
- FIG. 6 is an exemplary flowchart showing steps taken in a host system preparing a virtual machine for migration. Source system processing commences at 600, whereupon the source system receives a state extraction command from migration agent 160 to migrate a particular virtual machine executing on the source host system (step 610). At step 620, the source system (e.g., via a device driver or hypervisor) quiesces I/O and memory activity on source network adapter 150 in order to avoid state changes or corruption during the migration of the virtual machine.
- At step 630, the source system instructs source network adapter 150 to extract hardware state data pertaining to the migrating virtual machine and, at step 640, the source system copies the hardware state data to shared memory 142, which is system memory and part of the virtual machine that migrates to the destination system. The source system informs migration agent 160 that the virtual machine is ready for migration at step 650, and source system processing ends at 660.
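- A sketch of this source-side sequence follows, with dictionaries standing in for the adapter and the virtual machine's shared memory; all names and the address value are assumed for illustration.

```python
def prepare_for_migration(adapter, vm, start_address=0x1000):
    # start_address is illustrative; per the disclosure it is negotiated at
    # initialization or passed as a parameter of the extraction command.
    adapter["quiesced"] = True                    # step 620: stop I/O and memory activity
    state = adapter["hw_state"]                   # step 630: extract the context
    vm["shared_memory"][start_address] = state    # step 640: copy into shared memory
    return "ready-for-migration"                  # step 650: inform the migration agent

adapter = {"quiesced": False, "hw_state": b"\x0a\x0b"}
vm = {"shared_memory": {}}
print(prepare_for_migration(adapter, vm))         # -> ready-for-migration
```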
- FIG. 7 is an exemplary flowchart showing steps taken in migrating a virtual machine from a source system to a destination system. Source system processing commences at 700, whereupon the source system receives a request from migration agent 160 to migrate the virtual machine to the destination system. Destination system processing commences, whereupon the destination system receives a corresponding request at 755.
- At step 710, the source system's hypervisor establishes a connection with the destination system's hypervisor and requests the destination system to reserve resources for the migrating virtual machine. In one embodiment, the request includes remote adapter configuration parameters, which indicate a memory block starting address in the migrating virtual machine's shared memory where the hardware state data is stored (step 710).
- The destination system's hypervisor, at step 760, allocates space for the virtual machine. At subsequent steps, the source hypervisor streams the virtual machine, including the shared memory that holds the hardware state data, to the destination hypervisor, which stores it in the allocated memory space.
- At step 775, the destination system's hypervisor sends a "State Insert" command to the destination network adapter, which instructs the destination network adapter to retrieve the hardware state data from shared memory at the memory block starting address and configure the destination network adapter accordingly. In one embodiment, the memory block starting address is included in the resource request sent by the source system's hypervisor (step 710, discussed above). In another embodiment, the source hypervisor sends a separate message to the destination hypervisor that includes the memory block starting address. Once configured, the destination hypervisor sends a migration acknowledgement to the source hypervisor at step 789, and destination hypervisor processing ends at 790.
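- The destination half of the exchange might look like the following sketch; the request fields and helper name are assumptions, and a real implementation would issue the command through the destination device driver.

```python
def receive_migration(request, destination_adapter):
    # Steps 760 onward: allocate space and accept the streamed VM image,
    # which carries the shared memory holding the hardware state data.
    vm = {"shared_memory": dict(request["shared_memory"])}
    start = request["state_start_address"]        # conveyed in the resource request
    # Step 775: the "State Insert" command points the adapter at the state bytes.
    destination_adapter["hw_state"] = vm["shared_memory"][start]
    return vm, "migration-acknowledgement"        # step 789: ack to the source

request = {"shared_memory": {0x1000: b"\x0a\x0b"}, "state_start_address": 0x1000}
vm, ack = receive_migration(request, {})
print(ack)                                        # -> migration-acknowledgement
```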
- The source hypervisor receives the successful migration acknowledgement at step 720, and frees the resources (virtual machine, device driver, shared memory, etc.) at the source system at step 730. Source hypervisor processing ends at 735.
- FIG. 8 is an exemplary diagram showing a network adapter tracking and storing hardware state data for modules executing on a virtual machine. Virtual machine 135 utilizes modules 800-850 to send/receive stateful offload data packets to/from other virtual machines through source network adapter 150. Each of modules 800-850 has a "state" on source network adapter 150, which is stored in hardware state data 152. In one embodiment, hardware state data 152 includes a grouping of state information that represents a connection/datagram state. For example, hardware state data 152 may include the following (one possible in-memory grouping is sketched after the list):
- Protection Domain grouping of resources
- Protection Domain device statistics
- Queue Pair Send Queue Hardware producer index
- Queue Pair Send Queue Software consumer index
- Queue Pair Receive Queue Hardware producer index
- Queue Pair Receive Queue Software consumer index
- Associated Memory Regions
- Associated Address Handles
- Completion Queue Hardware producer index
- Completion Queue Software consumer index
- Completion Queue device statistics
- Virtual to Logical/Bus address mappings
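- One possible software grouping of the items listed above is sketched below; the disclosure leaves the concrete layout opaque to the device driver, so this structure is purely illustrative.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AdapterContext:                  # illustrative grouping of hardware state data 152
    protection_domain: int
    pd_statistics: Dict[str, int] = field(default_factory=dict)
    sq_hw_producer: int = 0            # send queue hardware producer index
    sq_sw_consumer: int = 0            # send queue software consumer index
    rq_hw_producer: int = 0            # receive queue hardware producer index
    rq_sw_consumer: int = 0            # receive queue software consumer index
    memory_regions: List[int] = field(default_factory=list)
    address_handles: List[int] = field(default_factory=list)
    cq_hw_producer: int = 0            # completion queue hardware producer index
    cq_sw_consumer: int = 0            # completion queue software consumer index
    cq_statistics: Dict[str, int] = field(default_factory=dict)
    addr_mappings: Dict[int, int] = field(default_factory=dict)  # virtual -> logical/bus
```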
- When virtual machine 135 migrates to a destination system, hardware state data 152 is copied to a shared memory area and migrates with virtual machine 135 over to the destination system. In turn, the destination system configures its destination network adapter according to the migrated hardware state data 152. In one embodiment, source network adapter 150 may manage thousands of instances of hardware state data 152, each corresponding to a different virtual machine. In this embodiment, only the hardware state data 152 corresponding to a migrating virtual machine is copied to the destination system.
- FIG. 9 is an exemplary diagram showing the migration of hardware state data from a source network adapter to a destination network adapter. Source network adapter 150 utilizes hardware state data 152 to send stateful offload data packets from a source virtual machine to a destination virtual machine. During migration to destination network adapter 190, hardware state data 152 is copied to shared memory 142 at memory block starting address 800. In turn, when virtual machine 135 is copied to a destination system as virtual machine 175, hardware state data 152 copies over in its native hardware format and is still stored at memory block starting address 800 in shared memory 182. In turn, hardware state data 152 is copied to destination network adapter 190 in its native hardware format because destination network adapter 190 is equivalent to source network adapter 150.
- Because destination network adapter 190 is equivalent to source network adapter 150, destination network adapter 190 utilizes the hardware state data in its native format; thus, address translations are not required.
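- The practical consequence is that the state moves as opaque bytes to the same starting address, as in this trivial sketch; nothing on the destination side interprets or translates the content, and the address and byte values shown are illustrative.

```python
# The state bytes land at the same starting address on the destination, so
# any internal offsets the adapter embedded in them remain valid as-is.
source_shared = {0x1000: b"\xde\xad\xbe\xef"}
dest_shared = {addr: bytes(blob) for addr, blob in source_shared.items()}
assert dest_shared[0x1000] == source_shared[0x1000]
```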
- FIG. 10 is an exemplary diagram showing a distributed policy service accessing a candidate table storage area to identify a suitable destination system. Migration agent 160 interfaces with local network policy server 1000 to identify a suitable destination system. In one embodiment, local network policy server 1000 manages policies and physical path translations pertaining to the source system's overlay network (e.g., overlay network environment 100). In another embodiment, policy servers for different overlay networks are co-located and differentiate policy requests from different migration agents according to their corresponding overlay network identifier.
- Distributed policy service 165 is structured hierarchically and, when local network policy server 1000 is not able to locate a suitable destination system, local network policy server 1000 queries root policy server 1010 to search for a suitable destination system. In turn, root policy server 1010 accesses candidate table store 1015 and sends a suitable destination system identifier to local network policy server 1000, which sends it to migration agent 160. In one embodiment, root policy server 1010 may send local network policy server 1000 a message to query local network policy server 1030 for a suitable destination system, as local network policy server 1030 manages host systems other than those managed by local network policy server 1000.
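- The hierarchy lends itself to a simple fallback lookup, sketched here with assumed table and predicate shapes.

```python
def find_destination(local_table, root_table, is_suitable):
    # Try the local network policy server's entries first...
    for entry in local_table:
        if is_suitable(entry):
            return entry
    # ...then escalate to the root policy server's global candidate table.
    for entry in root_table:
        if is_suitable(entry):
            return entry
    return None   # no suitable destination anywhere in the hierarchy

hit = find_destination([], [{"host": "host-272"}], lambda e: True)
print(hit)        # -> {'host': 'host-272'}
```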
- FIG. 11 is an exemplary diagram showing virtual network abstractions that are overlayed onto a physical network space. Virtual networks 1100 are part of an overlay network environment and include policies (e.g., policies 1103-1113) that provide end-to-end virtual connectivity between virtual machines (e.g., virtual machines 1102-1110). Each of virtual networks 1100 corresponds to a unique virtual identifier, which allows concurrent operation of multiple virtual networks over physical space 1120. As those skilled in the art can appreciate, some of virtual networks 1100 may include a portion of virtual machines 1102-1110, while other virtual networks 1100 may include different virtual machines and different policies than what is shown in FIG. 11.
- When a "source" virtual machine sends data to a "destination" virtual machine, a policy corresponding to the two virtual machines describes a logical path on which the data travels (e.g., through a firewall, through an accelerator, etc.). In other words, policies 1103-1113 define how different virtual machines communicate with each other (or with external networks). For example, a policy may define quality of service (QoS) requirements between a set of virtual machines; access controls associated with particular virtual machines; or a set of virtual or physical appliances (equipment) to traverse when sending or receiving data. In addition, some appliances may include accelerators such as compression, IP Security (IPSec), SSL, or security appliances such as a firewall or an intrusion detection system. In addition, a policy may be configured to disallow communication between the source virtual machine and the destination virtual machine.
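- An illustrative reduction of such policies to a lookup structure follows; keying by virtual network identifier plus the two endpoints is an assumption of this sketch, not a format given by the disclosure.

```python
# A policy names the appliances on the logical path between two VMs;
# an absent entry can be read as "communication disallowed".
policies = {
    ("vnet-1", "vm-a", "vm-b"): ["firewall", "ipsec-accelerator"],
    ("vnet-1", "vm-a", "vm-c"): [],          # direct path, no appliances
}

def logical_path(vnet_id, src_vm, dst_vm):
    return policies.get((vnet_id, src_vm, dst_vm))

print(logical_path("vnet-1", "vm-a", "vm-b"))   # -> ['firewall', 'ipsec-accelerator']
```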
- Virtual networks 1100 are logically overlayed onto
physical space 1120, which includes physical entities 1135 through 1188 (hosts, switches, and routers). While the way in which a policy is enforced in the system affects and depends onphysical space 1120, virtual networks 1100 are more dependent upon logical descriptions in the policies. As such, multiple virtual networks 1100 may be overlayed ontophysical space 1120. As can be seen,physical space 1120 is divided into subnet X 1125 andsubnet Y 1130. The subnets are joined viarouters 1135 and 1140. Virtual networks 1100 are independent of physical constraints of physical space 1120 (e.g., L2 layer constraints within a subnet). Therefore, a virtual network may include physical entities included in both subnet X 1125 andsubnet Y 1130. - In one embodiment, the virtual network abstractions support address independence between different virtual networks 1100. For example, two different virtual machines operating in two different virtual networks may have the same IP address. As another example, the virtual network abstractions support deploying virtual machines, which belong to the same virtual networks, onto different hosts that are located in different physical subnets (includes switches and/or routers between the physical entities). In another embodiment, virtual machines belonging to different virtual networks may be hosted on the same physical host. In yet another embodiment, the virtual network abstractions support virtual machine migration anywhere in a data center without changing the virtual machine's network address and losing its network connection.
- For further details regarding this architecture, see “Virtual Switch Data Control in a Distributed Overlay Network,” Ser. No. 13/204,211, filed Aug. 5, 2011, which is incorporated herein by reference.
-
FIG. 12 illustrates information handling system 1200, which is a simplified example of a computer system capable of performing the computing operations described herein. Information handling system 1200 includes one or more processors 1210 coupled to processor interface bus 1212. Processor interface bus 1212 connects processors 1210 to Northbridge 1215, which is also known as the Memory Controller Hub (MCH). Northbridge 1215 connects to system memory 1220 and provides a means for processor(s) 1210 to access the system memory. Graphics controller 1225 also connects to Northbridge 1215. In one embodiment, PCI Express bus 1218 connects Northbridge 1215 to graphics controller 1225. Graphics controller 1225 connects to display device 1230, such as a computer monitor.
- Northbridge 1215 and Southbridge 1235 connect to each other using bus 1219. In one embodiment, the bus is a Direct Media Interface (DMI) bus that transfers data at high speeds in each direction between Northbridge 1215 and Southbridge 1235. In another embodiment, a Peripheral Component Interconnect (PCI) bus connects the Northbridge and the Southbridge. Southbridge 1235, also known as the I/O Controller Hub (ICH), is a chip that generally implements capabilities that operate at slower speeds than the capabilities provided by the Northbridge. Southbridge 1235 typically provides various busses used to connect various components. These busses include, for example, PCI and PCI Express busses, an ISA bus, a System Management Bus (SMBus or SMB), and/or a Low Pin Count (LPC) bus. The LPC bus often connects low-bandwidth devices, such as boot ROM 1296 and "legacy" I/O devices (using a "super I/O" chip). The "legacy" I/O devices (1298) can include, for example, serial and parallel ports, keyboard, mouse, and/or a floppy disk controller. The LPC bus also connects Southbridge 1235 to Trusted Platform Module (TPM) 1295. Other components often included in Southbridge 1235 include a Direct Memory Access (DMA) controller, a Programmable Interrupt Controller (PIC), and a storage device controller, which connects Southbridge 1235 to nonvolatile storage device 1285, such as a hard disk drive, using bus 1284.
- ExpressCard 1255 is a slot that connects hot-pluggable devices to the information handling system. ExpressCard 1255 supports both PCI Express and USB connectivity as it connects to Southbridge 1235 using both the Universal Serial Bus (USB) and the PCI Express bus. Southbridge 1235 includes USB Controller 1240 that provides USB connectivity to devices that connect to the USB. These devices include webcam (camera) 1250, infrared (IR) receiver 1248, keyboard and trackpad 1244, and Bluetooth device 1246, which provides for wireless personal area networks (PANs). USB Controller 1240 also provides USB connectivity to other miscellaneous USB connected devices 1242, such as a mouse, removable nonvolatile storage device 1245, modems, network cards, ISDN connectors, fax, printers, USB hubs, and many other types of USB connected devices. While removable nonvolatile storage device 1245 is shown as a USB-connected device, removable nonvolatile storage device 1245 could be connected using a different interface, such as a Firewire interface, etcetera.
- Wireless Local Area Network (LAN) device 1275 connects to Southbridge 1235 via the PCI or PCI Express bus 1272. LAN device 1275 typically implements one of the IEEE 802.11 standards of over-the-air modulation techniques that all use the same protocol to wirelessly communicate between information handling system 1200 and another computer system or device. Optical storage device 1290 connects to Southbridge 1235 using Serial ATA (SATA) bus 1288. Serial ATA adapters and devices communicate over a high-speed serial link. The Serial ATA bus also connects Southbridge 1235 to other forms of storage devices, such as hard disk drives. Audio circuitry 1260, such as a sound card, connects to Southbridge 1235 via bus 1258. Audio circuitry 1260 also provides functionality such as audio line-in and optical digital audio in port 1262, optical digital output and headphone jack 1264, internal speakers 1266, and internal microphone 1268. Ethernet controller 1270 connects to Southbridge 1235 using a bus, such as the PCI or PCI Express bus. Ethernet controller 1270 connects information handling system 1200 to a computer network, such as a Local Area Network (LAN), the Internet, and other public and private computer networks.
- While FIG. 12 shows one information handling system, an information handling system may take many forms. For example, an information handling system may take the form of a desktop, server, portable, laptop, notebook, or other form factor computer or data processing system. In addition, an information handling system may take other form factors such as a personal digital assistant (PDA), a gaming device, an ATM machine, a portable telephone device, a communication device, or other devices that include a processor and memory.
- The Trusted Platform Module (TPM 1295) shown in FIG. 12 and described herein as providing security functions is but one example of a hardware security module (HSM). Therefore, the TPM described and claimed herein includes any type of HSM including, but not limited to, hardware security devices that conform to the Trusted Computing Group (TCG) standard entitled "Trusted Platform Module (TPM) Specification Version 1.2." The TPM is a hardware security subsystem that may be incorporated into any number of information handling systems, such as those outlined in FIG. 13.
- FIG. 13 provides an extension of the information handling system environment shown in FIG. 12 to illustrate that the methods described herein can be performed on a wide variety of information handling systems that operate in a networked environment. Types of information handling systems range from small handheld devices, such as handheld computer/mobile telephone 1310, to large mainframe systems, such as mainframe computer 1370. Examples of handheld computer 1310 include personal digital assistants (PDAs) and personal entertainment devices, such as MP3 players, portable televisions, and compact disc players. Other examples of information handling systems include pen, or tablet, computer 1320, laptop, or notebook, computer 1330, workstation 1340, personal computer system 1350, and server 1360. Other types of information handling systems that are not individually shown in FIG. 13 are represented by information handling system 1380. As shown, the various information handling systems can be networked together using computer network 1300. Types of computer network that can be used to interconnect the various information handling systems include Local Area Networks (LANs), Wireless Local Area Networks (WLANs), the Internet, the Public Switched Telephone Network (PSTN), other wireless networks, and any other network topology that can be used to interconnect the information handling systems. Many of the information handling systems include nonvolatile data stores, such as hard drives and/or nonvolatile memory. Some of the information handling systems shown in FIG. 13 depict separate nonvolatile data stores (server 1360 utilizes nonvolatile data store 1365, mainframe computer 1370 utilizes nonvolatile data store 1375, and information handling system 1380 utilizes nonvolatile data store 1385). The nonvolatile data store can be a component that is external to the various information handling systems or can be internal to one of the information handling systems. In addition, removable nonvolatile storage device 1245 can be shared among two or more information handling systems using various techniques, such as connecting the removable nonvolatile storage device 1245 to a USB port or other connector of the information handling systems.
- While particular embodiments of the present disclosure have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this disclosure and its broader aspects. Therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this disclosure. Furthermore, it is to be understood that the disclosure is solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. As a non-limiting example, as an aid to understanding, the following appended claims contain usage of the introductory phrases "at least one" and "one or more" to introduce claim elements.
However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to disclosures containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an”; the same holds true for the use in the claims of definite articles.
Claims (25)
1. (canceled)
2. (canceled)
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. (canceled)
9. An information handling system comprising:
one or more processors;
a memory coupled to at least one of the processors;
a set of computer program instructions stored in the memory and executed by at least one of the processors in order to perform actions of:
receiving a message to migrate a virtual machine executing on a first system to a second system, wherein the first system includes a first network adapter used to send data packets over a computer network;
extracting hardware state data stored in a native format in a first memory area of the first network adapter, wherein the hardware state data is used to process the data packets generated by the virtual machine;
migrating the virtual machine to the second system, wherein the migrating includes copying the extracted hardware state data from the first memory to the second system; and
configuring a second network adapter included on the second system, wherein the configuration includes writing the hardware state data to a second memory included on the second network adapter.
10. The information handling system of claim 9 wherein the processors perform additional actions comprising:
wherein the extracting includes storing the hardware state data in a first shared memory area in the first system at a memory block starting address; and
wherein the configuring includes retrieving the hardware state data from a second shared memory area in the second system at the memory block starting address.
11. The information handling system of claim 9 wherein the processors perform additional actions comprising:
utilizing the hardware state data in the native format to perform the configuring of the second network adapter.
12. The information handling system of claim 9 wherein the processors perform additional actions comprising:
establishing a connection between a first hypervisor included in the first system and a second hypervisor included in the second system;
allocating, by the second hypervisor, memory space on the second system on which to migrate the virtual machine; and
streaming the virtual machine from the first hypervisor to the second hypervisor, the second hypervisor storing the virtual machine in the allocated memory space.
13. The information handling system of claim 12 wherein the processors perform additional actions comprising:
sending, by the second hypervisor, a state insert command to the second network adapter, the state insert command instructing the second network adapter to retrieve the hardware state data from a memory block starting address included in the allocated memory space.
14. The information handling system of claim 9 wherein the processors perform additional actions comprising:
quiescing, by the first system prior to the migration, I/O and memory transactions corresponding to the virtual machine; and
resuming execution of the virtual machine, by the second system after the migration, at a state immediately prior to the quiescing at the first system.
15. The information handling system of claim 9 wherein the first network adapter processes the data packets according to a stateful offload format that is selected from the group consisting of a Remote Direct Memory Access (RDMA) format, an Internet Wide RDMA Protocol (iWARP) format, an Infiniband (IB) format, and a TCP Offload Engine (TOE) format.
16. The information handling system of claim 9 wherein the processors perform additional actions comprising:
wherein the data packets are sent by the first network adapter through an overlay network environment, the overlay network environment including one or more virtual networks that are independent of physical topology constraints of a physical network; and
wherein the overlay network environment includes a distributed policy service, the distributed policy service determining that the first network adapter and the second network adapter are equivalent.
17. A computer program product stored in a computer readable storage medium, comprising computer program code that, when executed by an information handling system, causes the information handling system to perform actions comprising:
receiving a message to migrate a virtual machine executing on a first system to a second system, wherein the first system includes a first network adapter used to send data packets over a computer network;
extracting hardware state data stored in a native format in a first memory area of the first network adapter, wherein the hardware state data is used to process the data packets generated by the virtual machine;
migrating the virtual machine to the second system, wherein the migrating includes copying the extracted hardware state data from the first memory to the second system; and
configuring a second network adapter included on the second system, wherein the configuration includes writing the hardware state data to a second memory included on the second network adapter.
18. The computer program product of claim 17 wherein the information handling system performs additional actions comprising:
wherein the extracting includes storing the hardware state data in a first shared memory area in the first system at a memory block starting address; and
wherein the configuring includes retrieving the hardware state data from a second shared memory area in the second system at the memory block starting address.
19. The computer program product of claim 17 wherein the information handling system performs additional actions comprising:
utilizing the hardware state data in the native format to perform the configuring of the second network adapter.
20. The computer program product of claim 17 wherein the information handling system performs additional actions comprising:
establishing a connection between a first hypervisor included in the first system and a second hypervisor included in the second system;
allocating, by the second hypervisor, memory space on the second system on which to migrate the virtual machine; and
streaming the virtual machine from the first hypervisor to the second hypervisor, the second hypervisor storing the virtual machine in the allocated memory space.
21. The computer program product of claim 20 wherein the information handling system performs additional actions comprising:
sending, by the second hypervisor, a state insert command to the second network adapter, the state insert command instructing the second network adapter to retrieve the hardware state data from a memory block starting address included in the allocated memory space.
22. The computer program product of claim 17 wherein the information handling system performs additional actions comprising:
quiescing, by the first system prior to the migration, I/O and memory transactions corresponding to the virtual machine; and
resuming execution of the virtual machine, by the second system after the migration, at a state immediately prior to the quiescing at the first system.
23. The computer program product of claim 17 wherein the first network adapter processes the data packets according to a stateful offload format that is selected from the group consisting of a Remote Direct Memory Access (RDMA) format, an Internet Wide RDMA Protocol (iWARP) format, an Infiniband (IB) format, and a TCP Offload Engine (TOE) format.
24. The computer program product of claim 17 wherein the information handling system performs additional actions comprising:
wherein the data packets are sent by the first network adapter through an overlay network environment, the overlay network environment including one or more virtual networks that are independent of physical topology constraints of a physical network; and
wherein the overlay network environment includes a distributed policy service, the distributed policy service determining that the first network adapter and the second network adapter are equivalent.
25. (canceled)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/252,676 US20130086298A1 (en) | 2011-10-04 | 2011-10-04 | Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion |
US13/584,059 US9588807B2 (en) | 2011-10-04 | 2012-08-13 | Live logical partition migration with stateful offload connections using context extraction and insertion |
GB1407143.5A GB2509463B (en) | 2011-10-04 | 2012-09-26 | Live logical partition migration with stateful offload connections using context extraction and insertion |
DE112012003776.6T DE112012003776T5 (en) | 2011-10-04 | 2012-09-26 | Migration of logical partitions with stateful swap data connections during operation using context triggering and insertion |
PCT/CN2012/082051 WO2013049990A1 (en) | 2011-10-04 | 2012-09-26 | Live logical partition migration with stateful offload connections using context extraction and insertion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/252,676 US20130086298A1 (en) | 2011-10-04 | 2011-10-04 | Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/584,059 Continuation US9588807B2 (en) | 2011-10-04 | 2012-08-13 | Live logical partition migration with stateful offload connections using context extraction and insertion |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130086298A1 true US20130086298A1 (en) | 2013-04-04 |
Family
ID=47993703
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/252,676 Abandoned US20130086298A1 (en) | 2011-10-04 | 2011-10-04 | Live Logical Partition Migration with Stateful Offload Connections Using Context Extraction and Insertion |
US13/584,059 Expired - Fee Related US9588807B2 (en) | 2011-10-04 | 2012-08-13 | Live logical partition migration with stateful offload connections using context extraction and insertion |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/584,059 Expired - Fee Related US9588807B2 (en) | 2011-10-04 | 2012-08-13 | Live logical partition migration with stateful offload connections using context extraction and insertion |
Country Status (4)
Country | Link |
---|---|
US (2) | US20130086298A1 (en) |
DE (1) | DE112012003776T5 (en) |
GB (1) | GB2509463B (en) |
WO (1) | WO2013049990A1 (en) |
Cited By (54)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130083690A1 (en) * | 2011-10-04 | 2013-04-04 | International Business Machines Corporation | Network Adapter Hardware State Migration Discovery in a Stateful Environment |
US20130138995A1 (en) * | 2011-11-30 | 2013-05-30 | Oracle International Corporation | Dynamic hypervisor relocation |
US20140365816A1 (en) * | 2013-06-05 | 2014-12-11 | Vmware, Inc. | System and method for assigning memory reserved for high availability failover to virtual machines |
US9032160B1 (en) * | 2011-12-29 | 2015-05-12 | Emc Corporation | Continuous data replication |
US9053068B2 (en) | 2013-09-25 | 2015-06-09 | Red Hat Israel, Ltd. | RDMA-based state transfer in virtual machine live migration |
US20150277879A1 (en) * | 2014-03-31 | 2015-10-01 | International Business Machines Corporation | Partition mobility for partitions with extended code |
CN105184192A (en) * | 2015-08-26 | 2015-12-23 | 宇龙计算机通信科技(深圳)有限公司 | Audio data processing method and apparatus for dual operation system |
US20150378760A1 (en) * | 2014-06-27 | 2015-12-31 | Vmware, Inc. | Network-based signaling to control virtual machine placement |
US9237188B1 (en) * | 2012-05-21 | 2016-01-12 | Amazon Technologies, Inc. | Virtual machine based content processing |
WO2015167538A3 (en) * | 2014-04-30 | 2016-04-28 | Hewlett Packard Enterprise Development Lp | Migrating objects from a source service to a target service |
US20160139944A1 (en) * | 2014-11-13 | 2016-05-19 | Freescale Semiconductor, Inc. | Method and Apparatus for Combined Hardware/Software VM Migration |
US20170242756A1 (en) * | 2016-02-22 | 2017-08-24 | International Business Machines Corporation | Live partition mobility with i/o migration |
US9760512B1 (en) * | 2016-10-21 | 2017-09-12 | International Business Machines Corporation | Migrating DMA mappings from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US9785451B1 (en) | 2016-10-21 | 2017-10-10 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US9787590B2 (en) | 2014-03-25 | 2017-10-10 | Mellanox Technologies, Ltd. | Transport-level bonding |
US9846602B2 (en) * | 2016-02-12 | 2017-12-19 | International Business Machines Corporation | Migration of a logical partition or virtual machine with inactive input/output hosting server |
US9875060B1 (en) | 2016-10-21 | 2018-01-23 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US9892070B1 (en) | 2016-10-21 | 2018-02-13 | International Business Machines Corporation | Migrating interrupts from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US9916267B1 (en) | 2016-10-21 | 2018-03-13 | International Business Machines Corporation | Migrating interrupts from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US9923800B2 (en) | 2014-10-26 | 2018-03-20 | Microsoft Technology Licensing, Llc | Method for reachability management in computer networks |
US9936014B2 (en) | 2014-10-26 | 2018-04-03 | Microsoft Technology Licensing, Llc | Method for virtual machine migration in computer networks |
US10002059B2 (en) | 2013-06-13 | 2018-06-19 | Vmware, Inc. | System and method for assigning memory available for high availability failover to virtual machines |
US10002018B2 (en) * | 2016-02-23 | 2018-06-19 | International Business Machines Corporation | Migrating single root I/O virtualization adapter configurations in a computing system |
US10025584B2 (en) | 2016-02-29 | 2018-07-17 | International Business Machines Corporation | Firmware management of SR-IOV adapters |
US10038629B2 (en) | 2014-09-11 | 2018-07-31 | Microsoft Technology Licensing, Llc | Virtual machine migration using label based underlay network forwarding |
US10042723B2 (en) | 2016-02-23 | 2018-08-07 | International Business Machines Corporation | Failover of a virtual function exposed by an SR-IOV adapter |
US10230607B2 (en) | 2016-01-28 | 2019-03-12 | Oracle International Corporation | System and method for using subnet prefix values in global route header (GRH) for linear forwarding table (LFT) lookup in a high performance computing environment |
US10333894B2 (en) | 2016-01-28 | 2019-06-25 | Oracle International Corporation | System and method for supporting flexible forwarding domain boundaries in a high performance computing environment |
US10348847B2 (en) | 2016-01-28 | 2019-07-09 | Oracle International Corporation | System and method for supporting proxy based multicast forwarding in a high performance computing environment |
US10348649B2 (en) | 2016-01-28 | 2019-07-09 | Oracle International Corporation | System and method for supporting partitioned switch forwarding tables in a high performance computing environment |
US10355972B2 (en) | 2016-01-28 | 2019-07-16 | Oracle International Corporation | System and method for supporting flexible P_Key mapping in a high performance computing environment |
US10360058B2 (en) * | 2016-11-28 | 2019-07-23 | International Business Machines Corporation | Input/output component selection for virtual machine migration |
US10536334B2 (en) | 2016-01-28 | 2020-01-14 | Oracle International Corporation | System and method for supporting subnet number aliasing in a high performance computing environment |
US10579437B2 (en) | 2016-12-01 | 2020-03-03 | International Business Machines Corporation | Migrating a logical partition with a native logical port |
US10616118B2 (en) | 2016-01-28 | 2020-04-07 | Oracle International Corporation | System and method for supporting aggressive credit waiting in a high performance computing environment |
US10630816B2 (en) | 2016-01-28 | 2020-04-21 | Oracle International Corporation | System and method for supporting shared multicast local identifiers (MILD) ranges in a high performance computing environment |
US10659340B2 (en) | 2016-01-28 | 2020-05-19 | Oracle International Corporation | System and method for supporting VM migration between subnets in a high performance computing environment |
US10666611B2 (en) | 2016-01-28 | 2020-05-26 | Oracle International Corporation | System and method for supporting multiple concurrent SL to VL mappings in a high performance computing environment |
US10877794B2 (en) * | 2010-12-10 | 2020-12-29 | Amazon Technologies, Inc. | Virtual machine morphing for heterogeneous migration environments |
US10942758B2 (en) * | 2017-04-17 | 2021-03-09 | Hewlett Packard Enterprise Development Lp | Migrating virtual host bus adaptors between sets of host bus adaptors of a target device in order to reallocate bandwidth to enable virtual machine migration |
US10956242B1 (en) * | 2017-12-06 | 2021-03-23 | Amazon Technologies, Inc. | Automating the migration of web service implementations to a service provider system |
US11005710B2 (en) | 2015-08-18 | 2021-05-11 | Microsoft Technology Licensing, Llc | Data center resource tracking |
US20220114070A1 (en) * | 2012-12-28 | 2022-04-14 | Iii Holdings 2, Llc | System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes |
US20220129299A1 (en) * | 2016-12-02 | 2022-04-28 | Vmware, Inc. | System and Method for Managing Size of Clusters in a Computing Environment |
US11474857B1 (en) * | 2020-05-06 | 2022-10-18 | Amazon Technologies, Inc. | Accelerated migration of compute instances using offload cards |
US11658916B2 (en) | 2005-03-16 | 2023-05-23 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US11656907B2 (en) | 2004-11-08 | 2023-05-23 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US11765101B2 (en) | 2005-04-07 | 2023-09-19 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11870647B1 (en) | 2021-09-01 | 2024-01-09 | Amazon Technologies, Inc. | Mapping on-premise network nodes to cloud network nodes |
US11960937B2 (en) | 2004-03-13 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
US12009996B2 (en) | 2004-06-18 | 2024-06-11 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US12120040B2 (en) | 2005-03-16 | 2024-10-15 | Iii Holdings 12, Llc | On-demand compute environment |
US12124878B2 (en) | 2022-03-17 | 2024-10-22 | Iii Holdings 12, Llc | System and method for scheduling resources within a compute environment using a scheduler process with reservation mask function |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9049257B2 (en) * | 2011-12-19 | 2015-06-02 | Vmware, Inc. | Methods and apparatus for an E-mail-based management interface for virtualized environments |
US9250827B2 (en) | 2012-12-14 | 2016-02-02 | Vmware, Inc. | Storing checkpoint file in high performance storage device for rapid virtual machine suspend and resume |
US9654390B2 (en) | 2013-09-03 | 2017-05-16 | Cisco Technology, Inc. | Method and apparatus for improving cloud routing service performance |
US9798574B2 (en) | 2013-09-27 | 2017-10-24 | Intel Corporation | Techniques to compose memory resources across devices |
US9928093B2 (en) | 2015-02-24 | 2018-03-27 | Red Hat Israel, Ltd. | Methods and systems for establishing connections associated with virtual machine migrations |
US10305976B2 (en) * | 2015-09-21 | 2019-05-28 | Intel Corporation | Method and apparatus for dynamically offloading execution of machine code in an application to a virtual machine |
US10901781B2 (en) | 2018-09-13 | 2021-01-26 | Cisco Technology, Inc. | System and method for migrating a live stateful container |
CN112002080B (en) * | 2019-05-27 | 2022-02-15 | 中电金融设备系统(深圳)有限公司 | Bank terminal, bank terminal equipment and information security processing method |
US12039365B2 (en) | 2021-03-30 | 2024-07-16 | International Business Machines Corporation | Program context migration |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110022812A1 (en) * | 2009-05-01 | 2011-01-27 | Van Der Linden Rob | Systems and methods for establishing a cloud bridge between virtual storage resources |
US8166265B1 (en) * | 2008-07-14 | 2012-04-24 | Vizioncore, Inc. | Systems and methods for performing backup operations of virtual machine files |
US8352938B2 (en) * | 2004-05-11 | 2013-01-08 | International Business Machines Corporation | System, method and program to migrate a virtual machine |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8156490B2 (en) | 2004-05-08 | 2012-04-10 | International Business Machines Corporation | Dynamic migration of virtual machine computer programs upon satisfaction of conditions |
US7656894B2 (en) | 2005-10-28 | 2010-02-02 | Microsoft Corporation | Offloading processing tasks to a peripheral device |
US8521912B2 (en) | 2006-01-12 | 2013-08-27 | Broadcom Corporation | Method and system for direct device access |
US7484029B2 (en) | 2006-02-09 | 2009-01-27 | International Business Machines Corporation | Method, apparatus, and computer usable program code for migrating virtual adapters from source physical adapters to destination physical adapters |
US20080189432A1 (en) | 2007-02-02 | 2008-08-07 | International Business Machines Corporation | Method and system for vm migration in an infiniband network |
US8005013B2 (en) | 2007-06-12 | 2011-08-23 | Hewlett-Packard Development Company, L.P. | Managing connectivity in a virtual network |
US7937698B2 (en) | 2007-08-02 | 2011-05-03 | International Business Machines Corporation | Extensible mechanism for automatically migrating resource adapter components in a development environment |
CN100553214C (en) | 2007-09-17 | 2009-10-21 | 北京航空航天大学 | Mobile virtual environment system |
US7984123B2 (en) | 2007-12-10 | 2011-07-19 | Oracle America, Inc. | Method and system for reconfiguring a virtual network path |
US8146082B2 (en) | 2009-03-25 | 2012-03-27 | Vmware, Inc. | Migrating virtual machines configured with pass-through devices |
US8335943B2 (en) | 2009-06-22 | 2012-12-18 | Citrix Systems, Inc. | Systems and methods for stateful session failover between multi-core appliances |
CN101593133B (en) | 2009-06-29 | 2012-07-04 | 北京航空航天大学 | Method and device for load balancing of resources of virtual machine |
US8504690B2 (en) | 2009-08-07 | 2013-08-06 | Broadcom Corporation | Method and system for managing network power policy and configuration of data center bridging |
US9158567B2 (en) | 2009-10-20 | 2015-10-13 | Dell Products, Lp | System and method for reconfigurable network services using modified network configuration with modified bandwidth capacity in dynamic virtualization environments |
US8244957B2 (en) | 2010-02-26 | 2012-08-14 | Red Hat Israel, Ltd. | Mechanism for dynamic placement of virtual machines during live migration based on memory |
US8510590B2 (en) | 2010-03-17 | 2013-08-13 | Vmware, Inc. | Method and system for cluster resource management in a virtualized computing environment |
US9223616B2 (en) | 2011-02-28 | 2015-12-29 | Red Hat Israel, Ltd. | Virtual machine resource reduction for live migration optimization |
US20130034094A1 (en) | 2011-08-05 | 2013-02-07 | International Business Machines Corporation | Virtual Switch Data Control In A Distributed Overlay Network |
US8660124B2 (en) | 2011-08-05 | 2014-02-25 | International Business Machines Corporation | Distributed overlay network data traffic management by a virtual server |
US8782128B2 (en) | 2011-10-18 | 2014-07-15 | International Business Machines Corporation | Global queue pair management in a point-to-point computer network |
- 2011-10-04 US US13/252,676 patent/US20130086298A1/en not_active Abandoned
- 2012-08-13 US US13/584,059 patent/US9588807B2/en not_active Expired - Fee Related
- 2012-09-26 DE DE112012003776.6T patent/DE112012003776T5/en not_active Ceased
- 2012-09-26 WO PCT/CN2012/082051 patent/WO2013049990A1/en active Application Filing
- 2012-09-26 GB GB1407143.5A patent/GB2509463B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8352938B2 (en) * | 2004-05-11 | 2013-01-08 | International Business Machines Corporation | System, method and program to migrate a virtual machine |
US8166265B1 (en) * | 2008-07-14 | 2012-04-24 | Vizioncore, Inc. | Systems and methods for performing backup operations of virtual machine files |
US20110022812A1 (en) * | 2009-05-01 | 2011-01-27 | Van Der Linden Rob | Systems and methods for establishing a cloud bridge between virtual storage resources |
Non-Patent Citations (1)
Title |
---|
Broadcom, Broadcom Network Controller Enhanced Virtualization Functionality, October 2009 *
Cited By (89)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11960937B2 (en) | 2004-03-13 | 2024-04-16 | Iii Holdings 12, Llc | System and method for an optimizing reservation in time of compute resources based on prioritization function and reservation policy parameter |
US12009996B2 (en) | 2004-06-18 | 2024-06-11 | Iii Holdings 12, Llc | System and method for providing dynamic provisioning within a compute environment |
US11709709B2 (en) | 2004-11-08 | 2023-07-25 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11861404B2 (en) | 2004-11-08 | 2024-01-02 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11886915B2 (en) | 2004-11-08 | 2024-01-30 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11762694B2 (en) | 2004-11-08 | 2023-09-19 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12008405B2 (en) | 2004-11-08 | 2024-06-11 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US12039370B2 (en) | 2004-11-08 | 2024-07-16 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11656907B2 (en) | 2004-11-08 | 2023-05-23 | Iii Holdings 12, Llc | System and method of providing system jobs within a compute environment |
US11658916B2 (en) | 2005-03-16 | 2023-05-23 | Iii Holdings 12, Llc | Simple integration of an on-demand compute environment |
US12120040B2 (en) | 2005-03-16 | 2024-10-15 | Iii Holdings 12, Llc | On-demand compute environment |
US11765101B2 (en) | 2005-04-07 | 2023-09-19 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11831564B2 (en) | 2005-04-07 | 2023-11-28 | Iii Holdings 12, Llc | On-demand access to compute resources |
US11720290B2 (en) | 2009-10-30 | 2023-08-08 | Iii Holdings 2, Llc | Memcached server functionality in a cluster of data processing nodes |
US10877794B2 (en) * | 2010-12-10 | 2020-12-29 | Amazon Technologies, Inc. | Virtual machine morphing for heterogeneous migration environments |
US8830870B2 (en) | 2011-10-04 | 2014-09-09 | International Business Machines Corporation | Network adapter hardware state migration discovery in a stateful environment |
US20130083690A1 (en) * | 2011-10-04 | 2013-04-04 | International Business Machines Corporation | Network Adapter Hardware State Migration Discovery in a Stateful Environment |
US8793528B2 (en) * | 2011-11-30 | 2014-07-29 | Oracle International Corporation | Dynamic hypervisor relocation |
US20130138995A1 (en) * | 2011-11-30 | 2013-05-30 | Oracle International Corporation | Dynamic hypervisor relocation |
US9032160B1 (en) * | 2011-12-29 | 2015-05-12 | Emc Corporation | Continuous data replication |
US9235481B1 (en) * | 2011-12-29 | 2016-01-12 | Emc Corporation | Continuous data replication |
US10649801B2 (en) | 2012-05-21 | 2020-05-12 | Amazon Technologies, Inc. | Virtual machine based content processing |
US9237188B1 (en) * | 2012-05-21 | 2016-01-12 | Amazon Technologies, Inc. | Virtual machine based content processing |
US9875134B2 (en) | 2012-05-21 | 2018-01-23 | Amazon Technologies, Inc. | Virtual machine based content processing |
US20220114070A1 (en) * | 2012-12-28 | 2022-04-14 | Iii Holdings 2, Llc | System, Method and Computer Readable Medium for Offloaded Computation of Distributed Application Protocols within a Cluster of Data Processing Nodes |
US9830236B2 (en) * | 2013-06-05 | 2017-11-28 | Vmware, Inc. | System and method for assigning memory reserved for high availability failover to virtual machines |
US20140365816A1 (en) * | 2013-06-05 | 2014-12-11 | Vmware, Inc. | System and method for assigning memory reserved for high availability failover to virtual machines |
US10002059B2 (en) | 2013-06-13 | 2018-06-19 | Vmware, Inc. | System and method for assigning memory available for high availability failover to virtual machines |
US9053068B2 (en) | 2013-09-25 | 2015-06-09 | Red Hat Israel, Ltd. | RDMA-based state transfer in virtual machine live migration |
US9787590B2 (en) | 2014-03-25 | 2017-10-10 | Mellanox Technologies, Ltd. | Transport-level bonding |
US20150277879A1 (en) * | 2014-03-31 | 2015-10-01 | International Business Machines Corporation | Partition mobility for partitions with extended code |
US9870210B2 (en) | 2014-03-31 | 2018-01-16 | International Business Machines Corporation | Partition mobility for partitions with extended code |
US9858058B2 (en) * | 2014-03-31 | 2018-01-02 | International Business Machines Corporation | Partition mobility for partitions with extended code |
WO2015167538A3 (en) * | 2014-04-30 | 2016-04-28 | Hewlett Packard Enterprise Development Lp | Migrating objects from a source service to a target service |
US20150378760A1 (en) * | 2014-06-27 | 2015-12-31 | Vmware, Inc. | Network-based signaling to control virtual machine placement |
US11182185B2 (en) * | 2014-06-27 | 2021-11-23 | Vmware, Inc. | Network-based signaling to control virtual machine placement |
US10038629B2 (en) | 2014-09-11 | 2018-07-31 | Microsoft Technology Licensing, Llc | Virtual machine migration using label based underlay network forwarding |
US9936014B2 (en) | 2014-10-26 | 2018-04-03 | Microsoft Technology Licensing, Llc | Method for virtual machine migration in computer networks |
US9923800B2 (en) | 2014-10-26 | 2018-03-20 | Microsoft Technology Licensing, Llc | Method for reachability management in computer networks |
US9811367B2 (en) * | 2014-11-13 | 2017-11-07 | NXP USA, Inc. | Method and apparatus for combined hardware/software VM migration |
US20160139944A1 (en) * | 2014-11-13 | 2016-05-19 | Freescale Semiconductor, Inc. | Method and Apparatus for Combined Hardware/Software VM Migration |
US11005710B2 (en) | 2015-08-18 | 2021-05-11 | Microsoft Technology Licensing, Llc | Data center resource tracking |
CN105184192A (en) * | 2015-08-26 | 2015-12-23 | 宇龙计算机通信科技(深圳)有限公司 | Audio data processing method and apparatus for dual operation system |
US11190429B2 (en) | 2016-01-28 | 2021-11-30 | Oracle International Corporation | System and method for allowing multiple global identifier (GID) subnet prefix values concurrently for incoming packet processing in a high performance computing environment |
US11082543B2 (en) | 2016-01-28 | 2021-08-03 | Oracle International Corporation | System and method for supporting shared multicast local identifiers (MLID) ranges in a high performance computing environment |
US10284448B2 (en) | 2016-01-28 | 2019-05-07 | Oracle International Corporation | System and method for using Q_Key value enforcement as a flexible way of providing resource access control within a single partition in a high performance computing environment |
US10536334B2 (en) | 2016-01-28 | 2020-01-14 | Oracle International Corporation | System and method for supporting subnet number aliasing in a high performance computing environment |
US11233698B2 (en) | 2016-01-28 | 2022-01-25 | Oracle International Corporation | System and method for supporting subnet number aliasing in a high performance computing environment |
US10581711B2 (en) | 2016-01-28 | 2020-03-03 | Oracle International Corporation | System and method for policing network traffic flows using a ternary content addressable memory in a high performance computing environment |
US10616118B2 (en) | 2016-01-28 | 2020-04-07 | Oracle International Corporation | System and method for supporting aggressive credit waiting in a high performance computing environment |
US10630816B2 (en) | 2016-01-28 | 2020-04-21 | Oracle International Corporation | System and method for supporting shared multicast local identifiers (MLID) ranges in a high performance computing environment |
US10637761B2 (en) | 2016-01-28 | 2020-04-28 | Oracle International Corporation | System and method for using Q_KEY value enforcement as a flexible way of providing resource access control within a single partition in a high performance computing environment |
US11824749B2 (en) | 2016-01-28 | 2023-11-21 | Oracle International Corporation | System and method for allowing multiple global identifier (GID) subnet prefix values concurrently for incoming packet processing in a high performance computing environment |
US10659340B2 (en) | 2016-01-28 | 2020-05-19 | Oracle International Corporation | System and method for supporting VM migration between subnets in a high performance computing environment |
US10666611B2 (en) | 2016-01-28 | 2020-05-26 | Oracle International Corporation | System and method for supporting multiple concurrent SL to VL mappings in a high performance computing environment |
US10374926B2 (en) * | 2016-01-28 | 2019-08-06 | Oracle International Corporation | System and method for monitoring logical network traffic flows using a ternary content addressable memory in a high performance computing environment |
US10333894B2 (en) | 2016-01-28 | 2019-06-25 | Oracle International Corporation | System and method for supporting flexible forwarding domain boundaries in a high performance computing environment |
US10868746B2 (en) | 2016-01-28 | 2020-12-15 | Oracle International Corporation | System and method for using subnet prefix values in global route header (GRH) for linear forwarding table (LFT) lookup in a high performance computing environment |
US10355972B2 (en) | 2016-01-28 | 2019-07-16 | Oracle International Corporation | System and method for supporting flexible P_Key mapping in a high performance computing environment |
US10348847B2 (en) | 2016-01-28 | 2019-07-09 | Oracle International Corporation | System and method for supporting proxy based multicast forwarding in a high performance computing environment |
US10230607B2 (en) | 2016-01-28 | 2019-03-12 | Oracle International Corporation | System and method for using subnet prefix values in global route header (GRH) for linear forwarding table (LFT) lookup in a high performance computing environment |
US10348649B2 (en) | 2016-01-28 | 2019-07-09 | Oracle International Corporation | System and method for supporting partitioned switch forwarding tables in a high performance computing environment |
US11496402B2 (en) | 2016-01-28 | 2022-11-08 | Oracle International Corporation | System and method for supporting aggressive credit waiting in a high performance computing environment |
US11140065B2 (en) | 2016-01-28 | 2021-10-05 | Oracle International Corporation | System and method for supporting VM migration between subnets in a high performance computing environment |
US11140057B2 (en) * | 2016-01-28 | 2021-10-05 | Oracle International Corporation | System and method for monitoring logical network traffic flows using a ternary content addressable memory in a high performance computing environment |
US9846602B2 (en) * | 2016-02-12 | 2017-12-19 | International Business Machines Corporation | Migration of a logical partition or virtual machine with inactive input/output hosting server |
US10042720B2 (en) * | 2016-02-22 | 2018-08-07 | International Business Machines Corporation | Live partition mobility with I/O migration |
US10761949B2 (en) | 2016-02-22 | 2020-09-01 | International Business Machines Corporation | Live partition mobility with I/O migration |
US20170242756A1 (en) * | 2016-02-22 | 2017-08-24 | International Business Machines Corporation | Live partition mobility with i/o migration |
US10691561B2 (en) | 2016-02-23 | 2020-06-23 | International Business Machines Corporation | Failover of a virtual function exposed by an SR-IOV adapter |
US10042723B2 (en) | 2016-02-23 | 2018-08-07 | International Business Machines Corporation | Failover of a virtual function exposed by an SR-IOV adapter |
US10002018B2 (en) * | 2016-02-23 | 2018-06-19 | International Business Machines Corporation | Migrating single root I/O virtualization adapter configurations in a computing system |
US10025584B2 (en) | 2016-02-29 | 2018-07-17 | International Business Machines Corporation | Firmware management of SR-IOV adapters |
US9875060B1 (en) | 2016-10-21 | 2018-01-23 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US10209918B2 (en) | 2016-10-21 | 2019-02-19 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US10417150B2 (en) | 2016-10-21 | 2019-09-17 | International Business Machines Corporation | Migrating interrupts from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US9760512B1 (en) * | 2016-10-21 | 2017-09-12 | International Business Machines Corporation | Migrating DMA mappings from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US9785451B1 (en) | 2016-10-21 | 2017-10-10 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US9916267B1 (en) | 2016-10-21 | 2018-03-13 | International Business Machines Corporation | Migrating interrupts from a source I/O adapter of a source computing system to a destination I/O adapter of a destination computing system |
US9892070B1 (en) | 2016-10-21 | 2018-02-13 | International Business Machines Corporation | Migrating interrupts from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US9830171B1 (en) | 2016-10-21 | 2017-11-28 | International Business Machines Corporation | Migrating MMIO from a source I/O adapter of a computing system to a destination I/O adapter of the computing system |
US10360058B2 (en) * | 2016-11-28 | 2019-07-23 | International Business Machines Corporation | Input/output component selection for virtual machine migration |
US10579437B2 (en) | 2016-12-01 | 2020-03-03 | International Business Machines Corporation | Migrating a logical partition with a native logical port |
US20220129299A1 (en) * | 2016-12-02 | 2022-04-28 | Vmware, Inc. | System and Method for Managing Size of Clusters in a Computing Environment |
US10942758B2 (en) * | 2017-04-17 | 2021-03-09 | Hewlett Packard Enterprise Development Lp | Migrating virtual host bus adaptors between sets of host bus adaptors of a target device in order to reallocate bandwidth to enable virtual machine migration |
US10956242B1 (en) * | 2017-12-06 | 2021-03-23 | Amazon Technologies, Inc. | Automating the migration of web service implementations to a service provider system |
US11474857B1 (en) * | 2020-05-06 | 2022-10-18 | Amazon Technologies, Inc. | Accelerated migration of compute instances using offload cards |
US11870647B1 (en) | 2021-09-01 | 2024-01-09 | Amazon Technologies, Inc. | Mapping on-premise network nodes to cloud network nodes |
US12124878B2 (en) | 2022-03-17 | 2024-10-22 | Iii Holdings 12, Llc | System and method for scheduling resources within a compute environment using a scheduler process with reservation mask function |
Also Published As
Publication number | Publication date |
---|---|
GB2509463B (en) | 2020-06-17 |
GB2509463A (en) | 2014-07-02 |
US20130086200A1 (en) | 2013-04-04 |
DE112012003776T5 (en) | 2014-06-18 |
WO2013049990A1 (en) | 2013-04-11 |
GB201407143D0 (en) | 2014-06-04 |
US9588807B2 (en) | 2017-03-07 |
Similar Documents
Publication | Title |
---|---|
US9588807B2 (en) | Live logical partition migration with stateful offload connections using context extraction and insertion | |
US8830870B2 (en) | Network adapter hardware state migration discovery in a stateful environment | |
US11372802B2 (en) | Virtual RDMA switching for containerized applications | |
US8660124B2 (en) | Distributed overlay network data traffic management by a virtual server | |
US8819211B2 (en) | Distributed policy service | |
US9092274B2 (en) | Acceleration for virtual bridged hosts | |
US20120291024A1 (en) | Virtual Managed Network | |
US8937940B2 (en) | Optimized virtual function translation entry memory caching | |
US20130034094A1 (en) | Virtual Switch Data Control In A Distributed Overlay Network | |
US8954704B2 (en) | Dynamic network adapter memory resizing and bounding for virtual function translation entry storage | |
US9001696B2 (en) | Distributed dynamic virtual machine configuration service | |
US9910687B2 (en) | Data flow affinity for heterogenous virtual machines | |
US20130097600A1 (en) | Global Queue Pair Management in a Point-to-Point Computer Network | |
US10911405B1 (en) | Secure environment on a server | |
US20130091501A1 (en) | Defining And Managing Virtual Networks In Multi-Tenant Virtualized Data Centers |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ALANIS, FRANCISCO JESUS;CARDONA, OMAR;REEL/FRAME:027015/0648
Effective date: 20110927 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |