US11496365B2 - Automated access to racks in a colocation data center - Google Patents
Automated access to racks in a colocation data center
- Publication number
- US11496365B2 (application US16/695,696)
- Authority
- US
- United States
- Prior art keywords
- rack
- network
- virtual
- request
- data center
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0853—Network architectures or network communication protocols for network security for authentication of entities using an additional device, e.g. smartcard, SIM or a different communication terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4604—LAN interconnection over a backbone network, e.g. Internet, Frame Relay
- H04L12/462—LAN interconnection over a bridge based backbone
- H04L12/4625—Single bridge functionality, e.g. connection of two networks over a single bridge
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4633—Interconnection of networks using encapsulation techniques, e.g. tunneling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/46—Interconnection of networks
- H04L12/4641—Virtual LANs, VLANs, e.g. virtual private networks [VPN]
- H04L12/4675—Dynamic sharing of VLAN information amongst network nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0876—Aspects of the degree of configuration automation
- H04L41/0886—Fully automatic configuration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0895—Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/111—Switch interfaces, e.g. port details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/08—Network architectures or network communication protocols for network security for authentication of entities
- H04L63/0876—Network architectures or network communication protocols for network security for authentication of entities based on the identity of the terminal or configuration, e.g. MAC address, hardware or software configuration or device fingerprint
Definitions
- At least some embodiments disclosed herein relate to computer access and configuration in general, and more particularly, but not limited to, configuring networks and/or controlling access to switches or other computing devices in a data center.
- A data center is a physical facility that houses computing systems and related networking equipment.
- A service provider can house its computer servers at one physical location in order to manage the servers more efficiently.
- The servers in a data center are typically connected to users of the computer servers via the Internet or a wide area network (WAN).
- The computer servers in the data center typically host applications and provide services.
- The computer servers and other related components, such as network switches, routers, etc., in a data center are housed in metallic cages referred to as racks.
- A rack includes a chassis to house the computer servers.
- A computer server in the form of a blade is mounted to the chassis.
- The rack has a wire harness for network cables that connect each blade to a computer network. Other cables provide power to each blade.
- Each server mounted in the rack may be configured to host one or more virtual machines.
- The servers in the rack are connected to top-of-rack (TOR) switch devices.
- The TOR switches are connected to other TOR switches via a spine switch or spine underlay fabric. This provides a physical network that can be used by multiple tenant networks to exchange data communications between host devices in different rack units. For example, packets of data may be sent from a virtual machine in one rack unit to a virtual machine in another rack unit. The packets can be routed between corresponding TOR switch devices and an intermediary spine switch.
- The TOR switches are configured to store address information associated with the host devices in the data center environment.
- TOR switches typically manage communications (e.g., routing and forwarding) that originate from and/or are destined for physical servers (and virtual machines and virtual switches hosted by the physical servers) in a rack.
- Each TOR switch can be configured to communicate with a network controller unit that manages communications between TOR switches in different racks.
- Tenant networks residing in an underlay fabric can be created, modified, provisioned, and/or deleted.
- Virtual switches and virtual machines are created and run on each physical server on top of a hypervisor.
- Each virtual switch can be configured to manage communications of virtual machines in a particular virtual network.
- Each virtual machine is a member of a tenant network (e.g., a layer 3 subnet that contains one or more VLANs).
- A TOR switch includes network ports for receiving and sending data packets to and from physical servers mounted in the racks.
- The ports are coupled to a switch application-specific integrated circuit (ASIC) that enables packets received on one port to be forwarded to a device in the system via a different port.
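The port-to-port forwarding behavior described above can be sketched with a minimal MAC-learning forwarding table. This is an illustrative model only, not the patent's implementation; the class and method names are assumptions.

```python
# Minimal sketch of switch forwarding: the switch learns which port a
# source MAC address was seen on, forwards frames for known destinations
# out the corresponding port, and floods frames for unknown destinations.

class ForwardingTable:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_to_port: dict[str, int] = {}

    def learn(self, src_mac: str, in_port: int) -> None:
        # Record (or refresh) the port on which this source MAC was seen.
        self.mac_to_port[src_mac] = in_port

    def forward(self, dst_mac: str, in_port: int) -> list[int]:
        # Known destination: send out that single port.
        out = self.mac_to_port.get(dst_mac)
        if out is not None and out != in_port:
            return [out]
        # Unknown destination: flood to all ports except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]
```

In a hardware switch this lookup is performed by the switch ASIC at line rate; the sketch only shows the logical table it maintains.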
- The TOR switches are used in a hyper-converged infrastructure (HCI) computing environment.
- The HCI computing environment can include thousands of devices such as servers and network switches.
- HCI services can be used to configure the network switches.
- An HCI management service maintains a listing of network configurations applied to network switches in various racks.
- The management service accesses a listing of network configurations applied to a first network switch, and dynamically applies the network configurations to a second network switch.
- A first network switch resides in a slot on a rack of a data center.
- The HCI management service uses a data store or other memory to maintain network configurations that have been applied to the first network switch.
- The network configurations may include switch bring-up configurations, management cluster configurations, and workload configurations.
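The configuration listing described above can be sketched as a store that records the configurations applied to one switch and can replay them onto another. This is a hedged sketch: the class name, category names, and replay semantics are assumptions for illustration.

```python
# Illustrative model of an HCI management service's configuration listing:
# configurations applied to a first switch are recorded and can be
# dynamically applied (replayed) to a second switch.

class SwitchConfigStore:
    CATEGORIES = ("bring-up", "management-cluster", "workload")

    def __init__(self):
        # switch id -> ordered list of (category, settings) entries
        self.applied: dict[str, list[tuple[str, dict]]] = {}

    def record(self, switch_id: str, category: str, settings: dict) -> None:
        if category not in self.CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.applied.setdefault(switch_id, []).append((category, settings))

    def replay(self, src_switch: str, dst_switch: str) -> int:
        """Apply every configuration recorded for src_switch to dst_switch."""
        entries = self.applied.get(src_switch, [])
        for category, settings in entries:
            self.record(dst_switch, category, dict(settings))
        return len(entries)
```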
- FIG. 1 shows an example data center that includes a network fabric connecting top-of-rack switches for racks in which various computing equipment is mounted, according to one embodiment.
- FIG. 2 shows an example server including server hardware executing a hypervisor that supports virtual machines, according to one embodiment.
- FIG. 3 shows an example computing device running virtual machines that connect to ports of a virtual switch, according to one embodiment.
- FIG. 4 shows a method for configuring a top-of-rack switch that is connected to a network fabric of a data center, according to one embodiment.
- FIG. 5 shows a method for connecting a group of networks to a group of racks in response to a configuration selection received from a user by input into a user interface, according to one embodiment.
- FIG. 6 shows a block diagram of a computing device, which can be used in various embodiments.
- FIG. 7 shows a block diagram of a computing device, according to one embodiment.
- FIG. 8 shows an example data center including a network fabric that is configured to provide IP services to one or more racks in the data center, according to one embodiment.
- FIG. 9 shows a method for providing IP services to racks in a data center, according to one embodiment.
- FIG. 10 shows an example building that houses racks of a data center and uses doors and locks to control access to the racks, according to one embodiment.
- FIG. 11 shows a method for configuring a TOR switch to connect a server and virtual networks of a network fabric to one or more ports of the TOR switch, according to one embodiment.
- FIG. 12 shows a method for controlling physical access to a rack in a data center, according to one embodiment.
- The network switches are top-of-rack (TOR) switches.
- Other types of network switches can be configured.
- The TOR switches are connected to a network fabric of the data center.
- The network fabric connects TOR switches used in various racks that are housed in the data center. Each rack mounts various computing hardware such as physical servers, routers, etc.
- The internet connectivity includes internet protocol (IP) services provided on demand in real-time to various customers that install computing equipment in racks of the data center.
- The customers can request the internet connectivity using a portal.
- TOR switches for the racks can be configured as described below (e.g., using the same portal).
- The embodiments regarding deploying internet connectivity are described in the section below titled “Automated Deployment of Internet Connectivity”.
- A request is received to configure a TOR switch in a rack of a customer of the data center.
- The data center automatically configures the TOR switch to connect a server to one or more virtual networks in a network fabric of the data center.
- Physical access to the racks of a customer is controlled by the data center.
- A request to access a rack is received from a client device of the customer.
- The customer is provided physical access to its racks.
- The physical access is provided by automatically unlocking one or more doors (and/or configuring other physical access capability) that permit the customer to physically access the racks.
- Provisioning of new hardware servers, and of the applications that run on the servers, can take significant time using prior information technology (IT) approaches. For example, it can take three to six months to deploy a single application, including provisioning of circuits, building out infrastructure in a colocation cage, installation and configuration of the hypervisor, and loading and testing of the application.
- The time to provision network connectivity and services often constrains colocation deployments of new workloads or applications.
- Another problem is difficulty in accurately forecasting bandwidth and overall IT capacity requirements more than a few months in advance. This results in many organizations initially over-provisioning to ensure that adequate bandwidth and compute resources are available as demand grows.
- A method includes: mounting a switch in a rack (e.g., a TOR switch of a rack in a data center), wherein the rack is configured for mounting a server connected to the switch; connecting the switch to a network fabric; receiving, by a switch configuration manager from a client device (e.g., a client device of a service provider customer that is deploying new IT infrastructure in the data center), instructions to create a virtual network; in response to receiving the instructions, creating the virtual network; and configuring, by the switch configuration manager and based on the instructions, the switch to associate the virtual network with the switch.
- The switch configuration manager is software executed by a computing device connected to the network fabric of a data center that houses racks of computer hardware, including the rack above.
- The switch configuration manager configures the TOR switches for all racks physically located in the data center.
- The switch configuration manager is accessed by service provider customers of the data center using an application programming interface (API) of the switch configuration manager.
- A client device for each customer can use the API to configure the TOR switches for its racks when the customer is deploying new IT infrastructure in the data center.
- The virtual network is a first virtual network.
- The method further includes: receiving, from a user interface of the client device, a request to create a group of networks, the group including the first virtual network; in response to receiving the request, creating the group of networks; and in response to receiving a user selection made in the user interface, connecting the group of networks to a group of racks, the connecting including automatically configuring ports of a TOR switch for each rack in the group of racks to provide access, by a respective server in each rack, to each network in the group of networks.
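The group-to-racks connection step above can be sketched as fan-out: for each (rack, network) pair, a port-configuration task is produced for that rack's TOR switch. The function name, task fields, and the `tor-<rack>` naming scheme are hypothetical, for illustration only.

```python
# Illustrative sketch: connecting a group of virtual networks to a group
# of racks by generating one switch-configuration task per (rack, network)
# pair. A workflow engine would then execute these tasks.

from itertools import product

def connect_group(networks: list[str], racks: list[str]) -> list[dict]:
    tasks = []
    for rack, network in product(racks, networks):
        tasks.append({
            "switch": f"tor-{rack}",   # hypothetical TOR switch naming
            "action": "attach-network",
            "network": network,
        })
    return tasks
```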
- A service provider or other customer is provided a user interface (UI) and an API.
- The service provider is a cloud service provider, a software as a service (SaaS) provider, or a managed hosting provider.
- The UI presents customer ports, compute nodes, and other elements connected to the network fabric of the data center.
- The customer can create virtual networks or groups of virtual networks using the UI.
- The customer can bundle several virtual networks into a defined group (and optionally assign a text label to the group).
- The customer can then use the UI to connect the defined group between racks and other computing devices.
- Data center automation software (e.g., executing on a virtual server of the data center) examines data for the group and configures connections for the virtual networks in the group as needed.
- The data center automation software manages network connections to a customer's racks.
- The customer can use a portal (e.g., provided by a user application executing on a client device such as a mobile device) to connect a group of networks to a group of racks.
- Each rack has a unique ID.
- The customer can see rack data, including location by metro region, on a display of its client device using the UI.
- The customer can also see IP connectivity instances (e.g., by metro or other geographic region) and ports in a metro or other geographic region that can be used to receive services over the network fabric. For example, multiple racks can all access the same IP connectivity instance.
- The portal displays endpoints and connections on the customer's client device, and the portal manages the relationship between the endpoints and connections.
- The portal provides control by the customer of a grouping mechanism for the customer's racks.
- The customer can manage network connections to its racks.
- The customer requests that a group of networks be connected to a group of racks.
- The data center automation software configures these connections.
- The data center automation software includes the switch configuration manager described above.
- A customer creates a group, and assigns networks to the group.
- The association of the networks to the group is tracked by the data center automation software.
- All networks that are part of the group are examined, and individual configurations are implemented as required to make the new connection.
- Devices and ports to be connected to the above networks are identified.
- Endpoints are determined, and work required to implement the connections is identified as one or more workflows.
- A workflow engine (e.g., software executing on a virtual machine of an administrator computing device of the data center) executes tasks in the workflows.
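The workflow execution described above can be sketched as an engine that runs an ordered list of named tasks and records completions for auditing. This is a minimal illustrative model; the class shape and audit log are assumptions, not the patent's design.

```python
# Minimal workflow-engine sketch: work required to implement connections
# is expressed as a workflow of named tasks, and the engine executes the
# tasks in order, logging each completed task name.

from typing import Callable

class WorkflowEngine:
    def __init__(self):
        self.log: list[str] = []

    def run(self, workflow: list[tuple[str, Callable[[], None]]]) -> list[str]:
        for name, task in workflow:
            task()                 # execute the unit of work
            self.log.append(name)  # record completion for auditing
        return self.log
```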
- Colocation racks can be delivered to the customer faster than when using prior approaches.
- The racks are standalone racks that include power, a locking mechanism for each rack, and network switches that are tied to the network fabric of the data center.
- IP transit is provided for servers in the racks for internet connectivity.
- A customer signs a service agreement, and the customer is added to an authentication service used in the data center.
- The authentication service manages access by, and identifies, the customer for the data center.
- The customer logs into a command center of the data center (e.g., the command center can be implemented by software that includes the switch configuration manager above).
- The customer selects a data center location, and specifies an order for a quantity of racks (e.g., from one rack to a predetermined limit).
- The command center performs various actions.
- The command center maintains a database of available rack inventory at various geographic data center locations worldwide.
- The command center allocates racks from available inventory in the location selected by the customer.
- The authentication service is updated with rack assignment information corresponding to these allocated racks.
- A security system at each physical data center facility where the selected racks are located is updated so that the customer is allowed to physically access the racks.
- A lock system used on the racks is configured to allow the customer to access the selected racks.
- IP connectivity (e.g., to provide internet access) is provisioned for the selected racks.
- The portal is updated with the locations of the selected racks, TOR switch information for the racks, and IP connectivity information (e.g., VLAN, subnet, and default gateway configuration information) for the racks.
- Billing of the customer for the colocation service is initiated (e.g., by electronic communication). Finally, the customer is notified by electronic communication or otherwise when the foregoing provisioning is complete.
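The rack-allocation step in the provisioning flow above can be sketched as an inventory database keyed by location, from which the command center allocates the requested quantity for a customer. The class, its fields, and the first-fit allocation policy are illustrative assumptions.

```python
# Illustrative sketch of the command center's rack inventory: available
# racks are tracked per data center location, and an order allocates racks
# from that location's inventory, recording the customer assignment.

class RackInventory:
    def __init__(self, inventory: dict[str, list[str]]):
        # location -> list of available rack IDs
        self.available = {loc: list(racks) for loc, racks in inventory.items()}
        self.assignments: dict[str, list[str]] = {}

    def allocate(self, customer: str, location: str, quantity: int) -> list[str]:
        pool = self.available.get(location, [])
        if len(pool) < quantity:
            raise ValueError(f"only {len(pool)} racks available in {location}")
        racks = [pool.pop(0) for _ in range(quantity)]
        self.assignments.setdefault(customer, []).extend(racks)
        return racks
```

In the described flow, the resulting assignment would also be pushed to the authentication service, the facility security system, and the rack lock system.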
- The customer can perform various actions.
- The customer accesses the command center to complete user setup, including uploading or taking a photo via the portal.
- The customer accesses the command center using the client device above.
- The client device is a mobile device having a camera and is used to take a photo of personnel associated with the customer. The photo is uploaded to the command center via the API above.
- When the customer physically arrives at a data center location, the customer checks in with security to receive a badge.
- The badge includes the photo previously provided by the customer above.
- The customer enters the facility and unlocks the selected racks using the badge.
- The badge contains security credentials necessary to unlock the locking mechanism on the selected racks.
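The badge-based unlocking above amounts to a credential check: the lock system verifies that the badge's credential maps to a customer who has been assigned the requested rack. The sketch below is a simplified illustrative model (real badge systems use cryptographic credentials, not plain IDs).

```python
# Illustrative access-control check for badge-based rack unlocking:
# a badge maps to a customer, a rack maps to its assigned customer,
# and the lock opens only when the two match.

class RackLockSystem:
    def __init__(self):
        self.badge_to_customer: dict[str, str] = {}
        self.rack_to_customer: dict[str, str] = {}

    def register_badge(self, badge_id: str, customer: str) -> None:
        self.badge_to_customer[badge_id] = customer

    def assign_rack(self, rack_id: str, customer: str) -> None:
        self.rack_to_customer[rack_id] = customer

    def unlock(self, badge_id: str, rack_id: str) -> bool:
        customer = self.badge_to_customer.get(badge_id)
        return customer is not None and self.rack_to_customer.get(rack_id) == customer
```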
- The customer installs computing equipment in the selected racks, and then cables the equipment to the TOR switches above.
- The customer accesses the command center and configures ports of the TOR switches.
- The switch ports are configured with a virtual local area network (VLAN) configuration desired for use by the customer.
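The self-service port configuration above can be sketched as assigning an access VLAN to a switch port with basic validation. Valid VLAN IDs are 1-4094 per IEEE 802.1Q; the class and method names are illustrative assumptions.

```python
# Illustrative sketch of VLAN port configuration on a TOR switch:
# each port can be assigned an access VLAN, with the VLAN ID validated
# against the IEEE 802.1Q range (1-4094).

class TorSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.port_vlans: dict[int, int] = {}

    def set_access_vlan(self, port: int, vlan_id: int) -> None:
        if not 0 <= port < self.num_ports:
            raise ValueError(f"no such port: {port}")
        if not 1 <= vlan_id <= 4094:
            raise ValueError(f"invalid VLAN ID: {vlan_id}")
        self.port_vlans[port] = vlan_id
```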
- The operator of the data center buys hardware equipment and installs it in racks.
- The equipment is made available to customers on demand. This permits customers to avoid having to build equipment for peak demand.
- A customer can purchase computing resources that are supported by this equipment.
- The purchased computing resources are based on a hyper-converged infrastructure (HCI).
- The customer can use the portal above to select computing resources.
- The computing resources are connected to one or more virtual networks configured by the customer using the portal. The command center above configures the TOR switches to connect these virtual networks to the hardware equipment of the data center.
- An on-demand IT infrastructure is provided to customers.
- The infrastructure is provided using an on-demand consumption model.
- The infrastructure is a physically-isolated, on-demand, hyper-converged infrastructure.
- The network fabric is a software-defined network fabric that provides connectivity via a secure layer 2 network throughout the data center. The customer can request access to network providers with direct connections to private or public cloud resources.
- A customer installs its own equipment in a first rack.
- The customer configures the TOR switches of the first rack using a portal as described above.
- The command center above configures ports of the TOR switches to implement the configuration requested by the customer.
- The customer can configure and deploy equipment in a second rack that has been pre-installed and is owned by the operator of the data center.
- The second rack includes equipment that provides a so-called “compute node” for deployment by the customer.
- The compute node is a dedicated, self-contained HCI unit that combines compute resources (e.g., CPU cores), memory resources (e.g., RAM), and storage resources (e.g., hard disk drives and solid-state disks) into a pre-configured integrated appliance.
- A group of compute nodes forms a cluster.
- The compute nodes provide dedicated hardware for a customer upon which the customer can deploy its desired hypervisor. The customer can then configure and manage the resources and virtual machines needed to run desired workloads.
- The customer uses the portal above to create one or more virtual networks that connect one or more servers of the first rack to one or more servers of the second rack.
- The first rack and second rack can be in different data centers.
- The network fabric of the data center above is a software-defined network fabric that links customers and resources throughout the data center.
- The network fabric uses an architecture that assures each customer's traffic is logically isolated and protected through the use of a virtual extensible local area network (VXLAN) protocol.
- The client device of the customer can define, provision, and configure private virtual layer 2 networks.
- Logical services are delivered to servers in a rack of the customer as virtual networks using VXLANs.
- All physical connections are delivered with an Ethernet layer 2 interface.
- Multiple services are delivered to customer servers over a single physical connection.
- The physical connection is a physical port implemented using single-mode fiber operating at 1-10 Gbps.
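Delivering multiple logical services over one physical layer 2 connection, as described above, can be sketched by giving each service its own VLAN tag on the shared port. The function name and sequential-tag scheme are assumptions for illustration; only the 1-4094 VLAN ID range comes from IEEE 802.1Q.

```python
# Illustrative sketch: multiplex several logical services over a single
# physical port by assigning each service a distinct 802.1Q VLAN tag.

def multiplex_services(services: list[str], base_vlan: int = 100) -> dict[str, int]:
    """Assign each service a distinct VLAN tag on the shared physical port."""
    if not 1 <= base_vlan <= 4094 - len(services) + 1:
        raise ValueError("VLAN range exhausted")
    return {svc: base_vlan + i for i, svc in enumerate(services)}
```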
- Automated configuration of network switches in a data center can provide various advantages.
- Customer colocation access can be automated and provided more quickly than using prior approaches.
- Colocation access can be provided in less than 48 hours (e.g., the same day) from receipt of the initial request by the customer.
- Deployment of Internet connectivity to rack switches can be automated.
- Multiple security systems and multiple rack switches can be configured simultaneously.
- Self-service configuration of TOR switches across multiple racks can be provided.
- FIG. 1 shows an example data center that includes a network fabric 101 connecting top-of-rack (TOR) switches 105 , 157 for racks 103 , 155 in which various computing equipment is mounted, according to one embodiment.
- the computing equipment mounted in rack 103 includes the TOR switch 105 , and also servers 107 , 109 , and router 113 .
- Rack 103 has a slot 111 in which additional equipment can be mounted (e.g., slot 111 and/or other slots can be used by a customer of the data center to install customer-owned equipment in rack 103 ).
- TOR switch 105 includes memory 106 and various ports (e.g., port 108 ) for receiving and sending communications (e.g., data packets).
- Memory 106 stores a network configuration (e.g., port connection assignments) as implemented by switch configuration manager 127 over network fabric 101 in response to a customer request received over a portal 133 .
- Various ports of TOR switch 105 connect to router 113 and/or servers 107 , 109 .
- Other ports of TOR switch 105 connect to one or more virtual networks 121 , 123 of network fabric 101 .
- all communications between rack 103 and network fabric 101 pass through a physical fiber port 104 (e.g., implemented using single-mode fiber).
- Rack 155 mounts computer equipment including the TOR switch 157 , servers 165 , 167 , and router 163 .
- Rack 155 includes a slot 169 for adding additional equipment.
- TOR switch 157 includes memory 159 and various ports, including port 161 .
- all communications to and from the network fabric 101 pass through a physical fiber port 153 .
- memory 159 is used to store data regarding a configuration of TOR switch 157 as automatically implemented by switch configuration manager 127 . In one example, this configuration is implemented in response to a selection made by a customer in a user interface of client device 137 .
- the data center of FIG. 1 can include numerous other racks connected to network fabric 101 using physical fiber ports and/or other types of connections.
- the virtual networks 121 , 123 of network fabric 101 can overlay various types of physical network switches.
- network fabric 101 comprises network switches 147 that are used to implement virtual extensible local area networks (VXLANs) 142 for transmission of data from a server of rack 103 to a server mounted in a different rack, such as rack 155 .
- VXLANs virtual extensible local area networks
- A virtual network connected to TOR switch 105 is converted into a VXLAN 142 for transmission of data from server 107 to server 165 .
- The VXLAN 142 is used to transmit the data to another virtual network connected to TOR switch 157 .
- VXLANs 142 can be configured by switch configuration manager 127 to implement the foregoing connection between servers. In one embodiment, this configuration is implemented in response to a request from client device 137 to add server 165 to a virtual network that includes server 107 .
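The stretching of a rack-local virtual network across two TOR switches via a VXLAN can be sketched as a small bookkeeping model. This is illustrative only; the class, the VNI numbering, and the VTEP terminology are assumptions for the sketch, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VxlanFabric:
    """Illustrative model: each virtual network is assigned a VXLAN network
    identifier (VNI), and the TOR switches whose racks participate are
    recorded as VXLAN tunnel endpoints (VTEPs)."""
    next_vni: int = 10000
    vni_by_network: dict = field(default_factory=dict)
    vteps_by_vni: dict = field(default_factory=dict)

    def extend_network(self, network: str, tor_switch: str) -> int:
        """Stretch `network` to `tor_switch`, allocating a VNI on first use."""
        if network not in self.vni_by_network:
            self.vni_by_network[network] = self.next_vni
            self.next_vni += 1
        vni = self.vni_by_network[network]
        self.vteps_by_vni.setdefault(vni, set()).add(tor_switch)
        return vni

fabric = VxlanFabric()
# A virtual network starts on TOR switch 105 (rack 103) and is extended to
# TOR switch 157 (rack 155) so server 107 can reach server 165.
vni = fabric.extend_network("virtual-network-121", "tor-105")
fabric.extend_network("virtual-network-121", "tor-157")
```

Because the VNI is allocated once per virtual network, extending the same network to further racks reuses the same identifier rather than creating a new overlay.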
- Network fabric 101 includes spine switches 139 as part of a physical switching fabric.
- Spine switches 139 include management ports 141 , which can be used by switch configuration manager 127 to configure spine switches 139 .
- Network fabric 101 is a leaf-spine data center switching fabric.
- Network fabric 101 is a software-defined network (SDN) controller-based data center switching fabric.
- The switching fabric supports all workloads (e.g., physical, virtual machine, and container) and choice of orchestration software.
- The switching fabric provides layer 2 (L2) switching and layer 3 (L3) routing.
- The switching fabric is scalable, resilient, has no single point of failure, and/or supports headless mode operations.
- A computing device 115 (e.g., a server or virtual machine) is connected to network fabric 101 .
- Computing device 115 executes a hyper-converged management service 117 , which can be used to allocate compute, memory, and/or storage resources provided by various racks, including rack 103 and/or rack 155 .
- Data store 119 is used to store data regarding this allocation of resources.
- A customer installs its own equipment into rack 103 .
- The customer sends a request for additional resources to add to its computing environment in the data center.
- Hyper-converged management service 117 allocates resources of servers in rack 155 for use by the customer.
- Virtual machines are created on rack 155 for handling workloads of the customer.
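The allocation step described above can be sketched as a first-fit reservation against per-rack free capacity. The class name, the capacity bookkeeping, and the first-fit policy are assumptions for illustration, not the patent's implementation:

```python
class HyperConvergedService:
    """Illustrative model of allocating compute/memory from racks."""
    def __init__(self):
        self.capacity = {}     # rack id -> [free cpu cores, free memory GiB]
        self.allocations = []  # (customer, rack, cpus, mem_gib)

    def register_rack(self, rack, cpus, mem_gib):
        self.capacity[rack] = [cpus, mem_gib]

    def allocate(self, customer, cpus, mem_gib):
        """Reserve resources on the first rack with enough free capacity."""
        for rack, (free_cpus, free_mem) in self.capacity.items():
            if free_cpus >= cpus and free_mem >= mem_gib:
                self.capacity[rack][0] -= cpus
                self.capacity[rack][1] -= mem_gib
                self.allocations.append((customer, rack, cpus, mem_gib))
                return rack
        raise RuntimeError("no rack has enough free capacity")

svc = HyperConvergedService()
svc.register_rack("rack-103", cpus=0, mem_gib=0)     # customer's own rack, full
svc.register_rack("rack-155", cpus=64, mem_gib=512)  # provider capacity
rack = svc.allocate("customer-a", cpus=8, mem_gib=64)
```

In this sketch the request cannot be satisfied on the customer's own rack, so the service falls through to the next rack with free capacity, mirroring the rack 103 to rack 155 example above.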
- A computing device 125 is connected to network fabric 101 .
- Switch configuration manager 127 executes on computing device 125 and performs various administrative functions for the data center (e.g., functions as described above). Some of the functions performed by switch configuration manager 127 are responsive to communications received from client device 137 over an external network 135 through portal 133 .
- Client device 137 uses API 132 of switch configuration manager 127 for these communications.
- Client device 137 also receives communications from switch configuration manager 127 using API 132 .
- One or more of the communications cause a display of information in a user interface of client device 137 .
- The user interface uses the information to display a configuration of a computing environment of a customer of the data center.
- In response to a communication from client device 137 , switch configuration manager 127 creates and/or configures various virtual networks of network fabric 101 (e.g., virtual networks 121 , 123 , and/or VXLANs 142 ). In one example, certain virtual networks are assigned to a group as designated by a customer using client device 137 . Data regarding the creation and/or configuration of virtual networks (e.g., the assignment of virtual networks to one or more groups) is stored in data store 131 .
- A customer of the data center can use client device 137 to request internet connectivity for one or more racks in its computing environment.
- For example, the customer can request that internet connectivity be provided for use by servers 107 , 109 .
- Communications with client device 137 regarding internet connectivity also can be performed using API 132 .
- Internet configuration manager 129 can configure IP services 143 to provide this internet connectivity.
- Internet configuration manager 129 communicates configuration data needed by switch configuration manager 127 for configuring TOR switch 105 so that servers 107 , 109 are connected to IP services 143 , which provides the internet connectivity. Configuration data regarding this internet connectivity can also be stored in data store 131 .
- The customer can request that one or more telecommunications carriers 145 be connected to racks in its computing environment (e.g., rack 103 or rack 155 ).
- The customer can request that servers in rack 103 or rack 155 be connected to a software-defined wide area network (SD-WAN) 149 .
- SD-WAN 149 is used by a customer to extend its computer networks over large distances, to connect remote branch offices to data centers and each other, and/or to deliver applications and services required to perform various business functions.
- The customer can request compute services 151 .
- Compute services 151 include one or more virtual machines created for use in the customer's computing environment.
- The virtual machines are created and run on servers in racks of the data center.
- Hyper-converged management service 117 can create and manage these virtual machines.
- Compute services 151 include storage resources.
- The storage resources can be non-volatile memory devices mounted in racks of the data center (e.g., mounted in rack 155 ).
- A virtualization control system (e.g., implemented by hyper-converged management service 117 or otherwise by computing device 115 ) abstracts server, storage, and network hardware resources of the data center to provide a more granular virtual server, virtual storage, and virtual network resource allocation that can be accessed by a customer.
- A customer console provisioning interface is coupled to the virtualization control system to permit the customer to configure its new environment.
- The virtualization control system responds to requests received from client device 137 .
- Portal 133 is a web portal.
- Client device 137 provides a user interface that enables a customer/user to associate a specified network connection with a new computing environment.
- The new computing environment can be associated with a number of virtual machines that is specified in the user interface.
- A customer can use the user interface to create, provision, and manage its virtual resources across numerous virtual environments (which may physically span multiple physical data centers). For example, some virtual servers are physically located on hardware in a first physical data center, and other virtual servers are physically located in a second physical data center. In one example, the difference in physical location is irrelevant to the customer because the customer is presented an abstracted view of data center assets that spans multiple virtualization control systems and multiple geographic locations.
- The above user interface enables a customer/user to add a network to a newly-created environment.
- The network is given a name and a VLAN identifier.
- The customer can create and place a new virtual server within the new environment.
- The customer can configure processing, memory, and storage resources to be associated with the new virtual server being created.
- The new server can then be deployed to the customer environment.
- The customer uses the user interface to perform configuration tasks for the new virtual server (e.g., providing a server name, selecting a number of processors to be associated with the virtual server, selecting an amount of system memory to be associated with the virtual server).
- The customer selects an operating system to associate with the new server.
- A customer can create groups of virtual servers. For example, customers can organize servers by function (e.g., a group of web servers, a group of SQL servers). The customer selects a particular virtual network (e.g., virtual network 121 ) to associate with the virtual server (e.g., a virtual machine running on server 107 or server 165 ), and then provides details of the IP address and DNS settings for the virtual server.
- Public IP addresses can be displayed in the user interface on client device 137 .
- Another display screen can allow a user to examine assignments of private IPs to different virtual servers that have been configured.
- The user interface on client device 137 can be used to create an Internet service.
- The user selects a public IP address and a protocol.
- The user may then select a port value and a service name.
- A service description may be provided.
- A list of Internet services that have been provisioned for the IP address can be displayed in the interface.
- The provisioned services can include, for example, an FTP service, an SMTP service, etc.
- Within each service are listed the nodes (e.g., virtual servers) that have been created and associated with a particular Internet service, as well as the protocol and port.
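The Internet-service records described above can be modeled as simple structured data. The field names, example addresses, and listing helper are assumptions for illustration, not a format defined by the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InternetService:
    """One provisioned Internet service as shown in the user interface:
    a public IP, protocol, port, name, and the nodes (virtual servers)
    associated with the service."""
    public_ip: str
    protocol: str
    port: int
    name: str
    description: str = ""
    nodes: List[str] = field(default_factory=list)

def services_for_ip(services, public_ip):
    """Return the provisioned services listed in the UI for one public IP."""
    return [s for s in services if s.public_ip == public_ip]

provisioned = [
    InternetService("203.0.113.10", "TCP", 21, "FTP", nodes=["vm-213"]),
    InternetService("203.0.113.10", "TCP", 25, "SMTP", nodes=["vm-215"]),
    InternetService("203.0.113.11", "TCP", 443, "HTTPS"),
]
listing = services_for_ip(provisioned, "203.0.113.10")
```

Filtering by public IP reproduces the per-address service list the interface displays, with each entry carrying its protocol, port, and associated nodes.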
- Switch configuration manager 127 can access the above customer environments (e.g., to add a network to a customer environment).
- FIG. 2 shows server 107 of FIG. 1 , according to one embodiment.
- Server 107 includes server hardware 201 that executes a hypervisor 209 .
- The hypervisor 209 supports virtual machines 213 , 215 .
- The server hardware 201 includes a processor 203 , memory 205 , and a network interface controller (NIC) 207 .
- NIC 207 connects server 107 to a port of TOR switch 105 . Another port of TOR switch 105 is connected to network fabric 101 .
- Virtual machines 213 , 215 generally communicate with network fabric 101 using TOR switch 105 .
- Virtual machine 213 has a virtual NIC 217 , and virtual machine 215 has a virtual NIC 219 .
- Virtual NICs 217 , 219 connect virtual machines 213 , 215 to one or more virtual networks 121 of network fabric 101 .
- Virtual machine 213 is associated with VLANs 223 of network fabric 101 .
- VLANs 223 may have been created by a customer of the data center that itself has installed server 107 in rack 103 .
- The customer installs server 107 after switch configuration manager 127 has configured one or more ports of TOR switch 105 in response to one or more communications from client device 137 .
- A locking mechanism on rack 103 does not permit entry by the customer until this configuration of TOR switch 105 has been completed by switch configuration manager 127 .
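The access-gating behavior described above can be sketched as a lock whose entry check depends on the switch-configuration state. The class and method names are illustrative assumptions; the patent does not specify this interface:

```python
class RackLock:
    """Illustrative model: the rack's electronic lock refuses customer entry
    until the switch configuration manager has marked the TOR switch
    configuration for that rack as complete."""
    def __init__(self, rack_id):
        self.rack_id = rack_id
        self.tor_configured = False

    def mark_tor_configured(self):
        # Called by the switch configuration manager once the TOR switch
        # ports have been configured for the customer's virtual networks.
        self.tor_configured = True

    def request_entry(self, customer_id) -> bool:
        """Permit entry only after TOR switch configuration has completed."""
        return self.tor_configured

lock = RackLock("rack-103")
before = lock.request_entry("customer-a")  # denied: switch not yet configured
lock.mark_tor_configured()
after = lock.request_entry("customer-a")   # permitted after configuration
```

Gating entry on configuration state ensures the customer only installs equipment (e.g., server 107) into a rack whose network is already provisioned.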
- Hypervisor 209 also supports a virtual switch 211 .
- Virtual machines 213 , 215 are connected to ports of virtual switch 211 .
- Virtual switch 211 also has one or more ports associated with VLANs 221 of network fabric 101 .
- FIG. 3 shows an example computing device 300 running virtual machines 303 , 305 , 307 that connect to various ports of a virtual switch 301 , according to one embodiment.
- Computing device 300 is an example of server 107 of FIG. 2 .
- The ports of virtual switch 301 are provided in various groups (e.g., Port Group A, B, C, D, E).
- Virtual machines 303 , 305 are connected to Port Group A via virtual NICs 309 , 311 .
- Virtual machine 307 is connected to Port Group E via virtual NIC 313 .
- Each port group corresponds to a virtual network.
- Virtual switch 301 is an example of virtual switch 211 of FIG. 2 .
- Each port group corresponds to one of VLANs 223 of FIG. 2 .
- Computing device 300 is an example of computing device 125 of FIG. 1 .
- Switch configuration manager 127 and/or internet configuration manager 129 can be implemented using virtual machines 303 , 305 , and/or 307 .
- Virtual machine 307 is used to implement portal 133 for communications with client device 137 using API 132 .
- Computing device 300 is used to implement compute services 151 of FIG. 1 .
- A customer can use client device 137 to request that one or more of virtual machines 303 , 305 , 307 be allocated to the customer's computing environment.
- One or more virtual networks of the customer's computing environment are connected to one or more of virtual machines 303 , 305 , 307 .
- Port Group A corresponds to a group of virtual machines requested by the customer using client device 137 .
- Port Group A corresponds to a group that is created in response to a customer request.
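The port-group model of FIG. 3 can be sketched as a mapping from port groups to virtual networks, where attaching a virtual NIC to a port group places that VM on the corresponding network. The VLAN IDs and identifier names are illustrative assumptions:

```python
class VirtualSwitch:
    """Illustrative model of a hypervisor virtual switch whose port groups
    each correspond to one virtual network (e.g., a VLAN)."""
    def __init__(self):
        self.vlan_by_group = {}  # port group name -> VLAN id
        self.group_by_vnic = {}  # virtual NIC id -> port group name

    def add_port_group(self, group, vlan_id):
        self.vlan_by_group[group] = vlan_id

    def connect(self, vnic, group):
        if group not in self.vlan_by_group:
            raise KeyError(f"unknown port group: {group}")
        self.group_by_vnic[vnic] = group

    def vlan_of(self, vnic):
        """Resolve the VLAN a VM reaches through its virtual NIC."""
        return self.vlan_by_group[self.group_by_vnic[vnic]]

vswitch = VirtualSwitch()
vswitch.add_port_group("Port Group A", vlan_id=100)  # VLAN ids assumed
vswitch.add_port_group("Port Group E", vlan_id=500)
vswitch.connect("vnic-309", "Port Group A")  # virtual machine 303
vswitch.connect("vnic-311", "Port Group A")  # virtual machine 305
vswitch.connect("vnic-313", "Port Group E")  # virtual machine 307
```

Two NICs in the same port group land on the same virtual network, which is how virtual machines 303 and 305 share connectivity while virtual machine 307 is isolated on a different network.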
- FIG. 4 shows a method for configuring a top-of-rack switch (e.g., TOR switch 105 of FIG. 1 ) that is connected to a network fabric (e.g., network fabric 101 of the data center of FIG. 1 ), according to one embodiment.
- The method of FIG. 4 can be implemented in the system of FIGS. 1, 2, and 3 .
- Processing logic can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- The method of FIG. 4 is performed at least in part by one or more processors of computing device 125 of FIG. 1 .
- Computing device 125 is implemented using the processors and memory of FIG. 6 or 7 (see below).
- TOR switches are connected to a network fabric of a data center.
- Each TOR switch corresponds to a rack of the data center, and is configured to provide access to the network fabric for one or more computing devices mounted in the rack.
- In one example, the TOR switches are TOR switches 105 and 157 of FIG. 1 , and the network fabric is network fabric 101 .
- The computing devices mounted in the rack include router 113 and server 107 , which are mounted by a customer after the rack has been assigned to the customer. This rack assignment occurs after the customer has requested the rack using client device 137 .
- A request is received from a client device via a portal.
- The request is to configure a first rack of the data center.
- In one example, the client device is client device 137 and the portal is portal 133 .
- Configuration data is received from the client device.
- The configuration data is for one or more virtual networks to be accessed by a first computing device mounted in the first rack.
- The configuration data includes a specification of the devices and ports that a customer desires to connect to each of the virtual networks.
- The configuration data includes IP addresses associated with internet connectivity (e.g., provided by IP services 143 ).
- The configuration data includes a subnet mask and an identification of a gateway (e.g., for use in configuring a router).
- In one example, the virtual networks include virtual networks 121 and 123 of FIG. 1 .
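An illustrative shape for this configuration data is shown below, together with basic validation using Python's standard `ipaddress` module. The dictionary layout and key names are assumptions for the sketch, not a format defined by the patent:

```python
import ipaddress

# Hypothetical configuration payload: devices/ports to attach to each
# virtual network, plus the IP address, subnet mask, and gateway.
request = {
    "rack": "rack-103",
    "virtual_networks": [
        {"name": "virtual-network-121",
         "ports": [{"device": "server-107", "port": 1},
                   {"device": "server-109", "port": 2}]},
        {"name": "virtual-network-123",
         "ports": [{"device": "router-113", "port": 8}]},
    ],
    "ip": {"address": "203.0.113.20",
           "subnet_mask": "255.255.255.0",
           "gateway": "203.0.113.1"},
}

def validate(req):
    """Sanity checks before the switch configuration manager acts on the
    request: the gateway must be on the configured subnet, and at least
    one virtual network must be specified."""
    iface = ipaddress.ip_interface(
        f'{req["ip"]["address"]}/{req["ip"]["subnet_mask"]}')
    gateway = ipaddress.ip_address(req["ip"]["gateway"])
    if gateway not in iface.network:
        raise ValueError("gateway is not on the configured subnet")
    if not req["virtual_networks"]:
        raise ValueError("at least one virtual network is required")
    return True

ok = validate(request)
```

Validating the payload up front lets the portal reject a misconfigured gateway or empty network list before any switch state is touched.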
- A first TOR switch of the first rack is configured.
- This configuration includes associating the one or more virtual networks with the first TOR switch.
- Switch configuration manager 127 configures TOR switch 105 of rack 103 .
- This configuration includes associating virtual networks 121 with TOR switch 105 .
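The configuration step can be sketched as applying port-to-network assignments to a TOR switch and recording the result, which is what the data store then reflects. The class and method names are illustrative assumptions:

```python
class SwitchConfigurationManager:
    """Illustrative model: associate virtual networks with ports of a TOR
    switch and record the resulting assignment in a data store."""
    def __init__(self):
        self.data_store = {}  # switch id -> {port number: [virtual networks]}

    def configure_tor(self, switch_id, assignments):
        """Apply port -> virtual-network assignments to one TOR switch."""
        config = self.data_store.setdefault(switch_id, {})
        for port, networks in assignments.items():
            config.setdefault(port, [])
            for net in networks:
                if net not in config[port]:  # idempotent re-application
                    config[port].append(net)
        return config

mgr = SwitchConfigurationManager()
mgr.configure_tor("tor-105", {
    1: ["virtual-network-121"],
    2: ["virtual-network-121", "virtual-network-123"],
})
```

Keeping the assignment idempotent means a repeated customer request does not duplicate network associations on a port.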
- A method comprises: mounting a switch (e.g., TOR switch 105 ) in a rack (e.g., rack 103 ), wherein the rack is configured for mounting a server (e.g., server 107 ) connected to the switch; connecting the switch to a network fabric (e.g., network fabric 101 ); receiving, by a switch configuration manager (e.g., switch configuration manager 127 ) from a client device (e.g., client device 137 ), instructions to create a virtual network (e.g., one of virtual networks 121 ); in response to receiving the instructions, creating the virtual network; and configuring, by the switch configuration manager and based on the instructions, the switch to associate the virtual network with the switch.
- The method further comprises converting the virtual network into a virtual extensible local area network (e.g., one of VXLANs 142 ) for transmission of data from the server over the network fabric to a server mounted in a different rack.
- In one embodiment, the rack is a first rack, the server is a first server, and the switch is a first switch.
- The method further comprises: receiving, by the switch configuration manager from the client device, instructions to associate the virtual network with a second server mounted in a second rack (e.g., rack 155 ); and in response to receiving the instructions to associate the virtual network with the second server, configuring a second switch (e.g., TOR switch 157 ) of the second rack to associate the VXLAN with the second switch.
- In one embodiment, the virtual network is a first virtual network, and the method further comprises: receiving, from the client device, an instruction to create a second virtual network associated with the second server; and in response to receiving the instruction to create the second virtual network, configuring the network fabric to associate the second virtual network with the second server.
- In one embodiment, the virtual network is a first virtual network, and the method further comprises: receiving, from the client device, an instruction to create a group including the first virtual network and a second virtual network; in response to receiving the instruction to create the group, storing data regarding the group in a data store (e.g., data store 131 ) that stores configuration data for switches in the network fabric; receiving, from the client device, an instruction to connect a virtual server to the group; and in response to receiving the instruction to connect the virtual server to the group, configuring at least one switch of the network fabric to associate the virtual server with the first virtual network and the second virtual network.
- In one embodiment, the rack is a first rack in a first data center at a first geographic location, and the virtual network is a first virtual network.
- The method further comprises: receiving, from the client device, an instruction to create a second virtual network; in response to receiving the instruction to create the second virtual network, configuring the network fabric to create the second virtual network; receiving an instruction to create a group including the first virtual network and the second virtual network; in response to receiving the instruction to create the group, updating, by the switch configuration manager, a data store (e.g., data store 131 ) to track membership of the first virtual network and the second virtual network in the group; receiving, from the client device, an instruction to connect the group to a second rack in a second data center at a second geographic location; and in response to receiving the instruction to connect the group to the second rack, configuring the network fabric to associate the second virtual network with a switch of the second rack.
- A method comprises: connecting top-of-rack (TOR) switches to a network fabric of at least one data center (e.g., the data center of FIG. 1 ), wherein each TOR switch corresponds to a respective rack of the at least one data center, and is configured to provide access to the network fabric for computing devices mounted in the respective rack; receiving, from a client device via a portal (e.g., portal 133 ), a request to configure a first rack of the at least one data center; receiving, from the client device, configuration data for at least one first virtual network to be accessed by a first computing device mounted in the first rack; and in response to receiving the configuration data, configuring a first TOR switch of the first rack, the configuring including associating the at least one first virtual network with the first TOR switch.
- The computing devices are physical servers (e.g., server 107 of FIG. 2 ) configured to run virtual servers (e.g., virtual machines 213 , 215 ), and the physical servers include a first physical server configured to run a first virtual server, the method further comprising configuring a virtual extensible local area network (VXLAN) of the network fabric to connect the first TOR switch to a second TOR switch of a second rack of the at least one data center, wherein the VXLAN is configured to transmit data from the first virtual server to a second virtual server running on a second physical server mounted in the second rack.
- In one embodiment, each of the computing devices is a physical server, a network device, or a storage device; the first TOR switch comprises at least one port; and configuring the first TOR switch comprises configuring the at least one port based on the configuration data.
- The first rack comprises a second TOR switch.
- A first port of the first TOR switch and a second port of the second TOR switch are configured for connection to the first computing device.
- The first TOR switch comprises a port, and configuring the first TOR switch comprises associating a virtual local area network (VLAN) with the port.
- The method further comprises: causing display, in a user interface of the client device, of an identifier for the first rack, and a geographic location of the first rack, wherein the identifier for the first rack is stored in a data store, and wherein the user interface enables a user to request that at least one virtual network be created in the network fabric; and storing, in the data store, a name and an identifier for each of the created at least one virtual network.
- The method further comprises causing display, in a user interface of the client device, of availability of ports for each of a plurality of geographic locations in which racks, including the first rack, are located, wherein each of the ports provides a connection to at least one of IP services (e.g., IP services 143 ) or compute services (e.g., compute services 151 ) over the network fabric.
- Configuring the first TOR switch further includes providing access for the first computing device to the IP services or compute services.
- The client device generates the configuration data based on inputs received by a user interface of the client device.
- The inputs include selection of an icon in the user interface that corresponds to the first rack, and selection of the icon causes presentation in the user interface of configuration options for the first TOR switch.
- The first computing device has a port configured to connect to the at least one virtual network.
- FIG. 5 shows a method for connecting a group of networks to a group of racks in response to a configuration selection received from a user by input into a user interface, according to one embodiment.
- The method of FIG. 5 can be implemented in the system of FIGS. 1, 2, and 3 .
- Processing logic can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- The method of FIG. 5 is performed at least in part by one or more processors of computing device 125 of FIG. 1 .
- Computing device 125 is implemented using the processors and memory of FIG. 6 or 7 (see below).
- A request is received from a client device.
- The request is based on input provided into a user interface of the client device.
- The request is to create a group of networks, where the group includes one or more virtual networks.
- In one example, the client device is client device 137 .
- The input is provided by a customer of the data center. The customer may provide a name which is assigned to the group.
- The group of networks is created.
- The group of networks is created by switch configuration manager 127 .
- The virtual networks that are assigned to the group are stored in data store 131 .
- In response to receiving a configuration selection made in the user interface, the group of networks is connected to a group of racks.
- The connecting includes automatically configuring ports of a TOR switch for each rack in the group of racks to provide access, by a server of each rack, to each network in the group of networks.
- The group of networks is connected to the group of racks by switch configuration manager 127 .
- Network fabric 101 and TOR switches 105 and 157 are configured to connect each network of the group to racks 103 and 155 .
- The group of networks includes virtual networks 121 and/or 123 .
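The FIG. 5 flow can be sketched end to end: create a named group of virtual networks, then connect the group to a group of racks by configuring each rack's TOR switch for every network in the group. The identifiers and data structures are illustrative assumptions:

```python
class GroupConnector:
    """Illustrative model of the FIG. 5 method: network groups are created
    from a customer request, then connected to racks on a later selection."""
    def __init__(self):
        self.groups = {}      # group name -> list of virtual networks
        self.tor_config = {}  # TOR switch id -> set of virtual networks

    def create_group(self, name, networks):
        # Block: create the group of networks and record its membership.
        self.groups[name] = list(networks)

    def connect_group_to_racks(self, name, tor_by_rack):
        """Give every rack's servers access to every network in the group
        by configuring that rack's TOR switch."""
        for rack, tor in tor_by_rack.items():
            ports = self.tor_config.setdefault(tor, set())
            ports.update(self.groups[name])

conn = GroupConnector()
conn.create_group("web-tier", ["virtual-network-121", "virtual-network-123"])
conn.connect_group_to_racks("web-tier",
                            {"rack-103": "tor-105", "rack-155": "tor-157"})
```

After the connection step, every TOR switch in the rack group carries every network in the network group, matching the racks 103/155 and switches 105/157 example above.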
- A method comprises: receiving, by a switch configuration manager (e.g., switch configuration manager 127 ) from a client device (e.g., client device 137 ), instructions to create a virtual network (e.g., one of virtual networks 121 ); in response to receiving the instructions, creating the virtual network; and configuring, by the switch configuration manager and based on the instructions, a switch (e.g., TOR switch 105 ) to associate the virtual network with the switch.
- In one embodiment, the virtual network is a first virtual network, and the method further comprises: receiving, from a user interface of the client device, a request to create a group of networks, the group including the first virtual network; in response to receiving the request, creating the group of networks; and in response to receiving a user selection made in the user interface, connecting the group of networks to a group of racks (e.g., racks 103 and 155 ), the connecting comprising automatically configuring ports of a TOR switch (e.g., TOR switches 105 and 157 ) for each rack in the group of racks to provide access, by a respective server in each rack, to each network (e.g., virtual networks 121 ) in the group of networks.
- A method comprises: receiving, over a network, a request to configure a first rack of at least one data center; receiving, over the network, configuration data for at least one first virtual network to be accessed by a first computing device mounted in the first rack; and in response to receiving the configuration data, configuring a first TOR switch of the first rack, the configuring including associating the at least one first virtual network with the first TOR switch.
- The method further comprises: receiving, from a user interface of a client device, a request to create a group of networks, the group including the at least one virtual network; in response to receiving the request, creating the group of networks; and in response to receiving a configuration selection made in the user interface, connecting the group of networks to a group of racks, the connecting comprising automatically configuring ports of a TOR switch for each rack in the group of racks to provide access, by a respective server in each rack, to each network in the group of networks.
- A system comprises: a network fabric to transmit data in at least one data center, wherein the at least one data center includes racks for mounting servers connected to the network fabric; network switches (e.g., TOR switches 105 , 157 ) connected to the network fabric, wherein each network switch corresponds to a respective one of the racks; a data store (e.g., data store 131 ) to store configuration data for the network switches; at least one processing device; and memory containing instructions configured to instruct the at least one processing device to: receive, via a portal from a client device, a request to create a computing environment supported on a plurality of racks (e.g., racks 103 , 155 ) connected by the network fabric, the plurality of racks including a first rack for mounting a physical server configured to communicate with a physical server of a second rack in the computing environment; create at least one virtual network (e.g., virtual networks 121 ) in the computing environment; and configure at least one of the network switches to associate the at least one virtual network with the computing environment.
- The instructions are further configured to instruct the at least one processing device to: receive, via a user interface of the client device, configuration selections associated with a new computing device in the computing environment; based on the configuration selections, configure processing resources, memory resources, and storage resources; and deploy the new computing device to the computing environment, wherein the new computing device is configured to run a virtual server connected to the at least one virtual network.
- FIG. 6 shows a block diagram of a computing device, which can be used in various embodiments. While FIG. 6 illustrates various components, it is not intended to represent any particular architecture or manner of interconnecting the components. Other systems that have fewer or more components may also be used.
- The computing device is a server. In one embodiment, several servers may be used, each residing on a separate computing system, or one or more servers may run on the same computing device, in various combinations.
- Computing device 8201 includes an inter-connect 8202 (e.g., bus and system core logic), which interconnects a microprocessor(s) 8203 and memory 8208 .
- The microprocessor 8203 is coupled to cache memory 8204 in the example of FIG. 6 .
- The inter-connect 8202 interconnects the microprocessor(s) 8203 and the memory 8208 together and also interconnects them to a display controller and display device 8207 and to peripheral devices such as input/output (I/O) devices 8205 through an input/output controller(s) 8206 .
- I/O devices include mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices which are well known in the art.
- The inter-connect 8202 may include one or more buses connected to one another through various bridges, controllers and/or adapters.
- The I/O controller 8206 includes a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals.
- The memory 8208 may include ROM (Read Only Memory), volatile RAM (Random Access Memory), and non-volatile memory, such as a hard drive, flash memory, etc.
- Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory.
- Non-volatile memory is typically a solid-state drive, magnetic hard drive, a magnetic optical drive, or an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system.
- The non-volatile memory may also be a random access memory.
- The non-volatile memory can be a local device coupled directly to the rest of the components in the computing device.
- A non-volatile memory that is remote from the computing device, such as a network storage device coupled to the computing device through a network interface such as a modem or Ethernet interface, can also be used.
- A computing device as illustrated in FIG. 6 is used to implement computing device 115 , computing device 125 , TOR switch 105 , server 107 , and/or other servers.
- A computing device as illustrated in FIG. 6 is used to implement a user terminal or a mobile device on which an application is installed or being installed.
- A user terminal may be in the form of, for example, a laptop or notebook computer, or a personal desktop computer.
- One or more servers can be replaced with the service of a peer-to-peer network of a plurality of data processing systems, or a network of distributed computing systems.
- The peer-to-peer network or distributed computing system can be collectively viewed as a computing device.
- Embodiments of the disclosure can be implemented via the microprocessor(s) 8203 and/or the memory 8208 .
- the functionalities described can be partially implemented via hardware logic in the microprocessor(s) 8203 and partially using the instructions stored in the memory 8208 .
- Some embodiments are implemented using the microprocessor(s) 8203 without additional instructions stored in the memory 8208 .
- Some embodiments are implemented using the instructions stored in the memory 8208 for execution by one or more general purpose microprocessor(s) 8203 .
- the disclosure is not limited to a specific configuration of hardware and/or software.
- FIG. 7 shows a block diagram of a computing device, according to one embodiment.
- the computing device of FIG. 7 is used to implement client device 137 .
- the computing device includes an inter-connect 9221 connecting the presentation device 9229 , user input device 9231 , a processor 9233 , a memory 9227 , a position identification unit 9225 and a communication device 9223 .
- the position identification unit 9225 is used to identify a geographic location.
- the position identification unit 9225 may include a satellite positioning system receiver, such as a Global Positioning System (GPS) receiver, to automatically identify the current position of the computing device.
- the communication device 9223 is configured to communicate with a server to provide data, including configuration data and/or an image from a camera of the computing device.
- the user input device 9231 is configured to receive or generate user data or content.
- the user input device 9231 may include a text input device, a still image camera, a video camera, and/or a sound recorder, etc.
- Prior provisioning approaches for a colocation environment and network are time-consuming and manually intensive.
- the provisioning needs can include a need to integrate internet connectivity as part of the colocation network.
- the foregoing situation for prior provisioning approaches creates a technical problem in which time and expense are increased when adding internet connectivity, and the chance for error in configuration is increased. This can negatively impact the reliability of the colocation network operation.
- a method includes receiving, from a client device (e.g., a customer that is installing and provisioning new equipment), a request to provide internet protocol (IP) services to at least one computing device mounted in one or more racks of a data center; assigning IP addresses corresponding to the IP services to be provided; creating a virtual network in a network fabric of the data center; in response to receiving the request, associating the virtual network with the assigned IP addresses; and configuring at least one top-of-rack (TOR) switch to connect at least one port of the TOR switch to the virtual network.
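The claimed sequence (receive a request, assign IP addresses, create a virtual network, associate the two, and configure a TOR switch port) can be sketched in Python. The `AddressPool` and `Fabric` classes, the VNI numbering, and the example CIDR are hypothetical stand-ins for data store 831 , network fabric 801 , and switch configuration manager 827 , not part of the patent:

```python
import ipaddress

class AddressPool:
    """Hypothetical stand-in for the data center's available IP space."""
    def __init__(self, cidr):
        self.available = list(ipaddress.ip_network(cidr).hosts())
        self.allocated = {}  # customer_id -> list of assigned addresses

    def allocate(self, customer_id, count):
        addrs, self.available = self.available[:count], self.available[count:]
        self.allocated.setdefault(customer_id, []).extend(addrs)
        return addrs

class Fabric:
    """Hypothetical network fabric: virtual networks keyed by VXLAN VNI."""
    def __init__(self):
        self.virtual_networks = {}  # vni -> {"addresses": [...], "ports": [...]}
        self.next_vni = 10000

    def create_virtual_network(self):
        vni, self.next_vni = self.next_vni, self.next_vni + 1
        self.virtual_networks[vni] = {"addresses": [], "ports": []}
        return vni

def provision_ip_services(pool, fabric, customer_id, count, tor_port):
    # Assign IP addresses corresponding to the IP services to be provided.
    addresses = pool.allocate(customer_id, count)
    # Create a virtual network in the network fabric of the data center.
    vni = fabric.create_virtual_network()
    # In response to the request, associate the virtual network with the
    # assigned IP addresses.
    fabric.virtual_networks[vni]["addresses"] = addresses
    # Configure a TOR switch so that one of its ports connects to the
    # virtual network.
    fabric.virtual_networks[vni]["ports"].append(tor_port)
    return vni, addresses
```

A request for four public addresses on port 808 of TOR switch 805 would then reduce to a single call such as `provision_ip_services(pool, fabric, "cust-1", 4, ("tor-805", 808))`.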
- a customer of a data center requests internet connectivity for its rack in the data center.
- the data center (e.g., using a software configuration manager executing on a server, or a controller of a software-defined network) provides the internet connectivity automatically.
- the internet connectivity runs from a router of the data center to one or more switches (e.g., a TOR switch) at the customer's rack.
- the internet connectivity is provided automatically in about 30 seconds after a request from the customer is received.
- the request is received via a portal from a client device of the customer.
- the data center provides an application programming interface that is used by the client device to communicate configuration data regarding the internet connectivity.
- the configuration data can be used to configure one or more TOR switches of the customer's rack(s).
- a customer of the data center specifies a virtual network (e.g., a virtual local area network (VLAN)) to use.
- Data center automation software configures a network fabric of the data center to use the specified virtual network (e.g., VLAN).
- the data center automation software provides the customer with the IP addresses to use for the internet connectivity (and also provides the netmask and gateway data used for configuring the customer's router).
- the IP address space used for the internet connectivity is carved by the data center automation software out of the overall data center IP address space.
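The carving step can be illustrated with Python's standard `ipaddress` module. The free-block bookkeeping and the example CIDR below are hypothetical; the patent does not specify an allocation algorithm:

```python
import ipaddress

def carve_subnet(free_blocks, hosts_needed):
    """Carve the smallest subnet with at least `hosts_needed` usable host
    addresses out of a list of free CIDR blocks (a sketch of the
    allocation step; a real allocator would also persist the result)."""
    # A /p IPv4 subnet has 2**(32 - p) - 2 usable host addresses.
    prefix = 32
    while (2 ** (32 - prefix)) - 2 < hosts_needed:
        prefix -= 1
    for i, block in enumerate(free_blocks):
        if block.prefixlen <= prefix:
            if block.prefixlen == prefix:
                carved = block
            else:
                carved = next(block.subnets(new_prefix=prefix))
            # Return the unused remainder of the block to the free list.
            free_blocks[i:i + 1] = list(block.address_exclude(carved))
            gateway = next(carved.hosts())  # first usable host as gateway
            return carved, gateway, carved.netmask
    raise ValueError("no free block is large enough")
```

The returned subnet, gateway, and netmask correspond to the data described above as being provided to the customer for configuring its router.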
- virtual extensible local area networks (VXLANs) are used in conjunction with switches and the network fabric of the data center.
- a customer's existing VLANs are attached to a port of one of the switches in the customer's rack.
- the VLANs are converted into VXLANs.
- Data is sent to the necessary destinations, then converted back to the customer-specified VLANs (this provides a tunneling mechanism in which the VLAN data is encapsulated inside of a VXLAN for transport).
- this tunneling mechanism can be used for thousands of networks.
- Logical services are delivered to the switches at the customer's rack as virtual networks using the VXLANs.
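The encapsulation described above can be sketched using the VXLAN header layout from RFC 7348 (8 bytes: an "I" flag marking the VNI as valid, a 24-bit VNI, and reserved bits). The VLAN-to-VNI mapping below is an arbitrary example, and the outer UDP/IP headers added by the transport network are omitted:

```python
import struct

# Hypothetical mapping from customer VLAN IDs to fabric VXLAN VNIs.
VLAN_TO_VNI = {100: 10100, 200: 10200}

def vxlan_encapsulate(vlan_id, inner_frame):
    """Wrap a customer frame in a minimal 8-byte VXLAN header:
    flags byte 0x08 (VNI valid), reserved bytes, then the 24-bit VNI."""
    vni = VLAN_TO_VNI[vlan_id]
    header = struct.pack("!II", 0x08 << 24, vni << 8)
    return header + inner_frame

def vxlan_decapsulate(packet):
    """Recover the customer VLAN and the original frame at the far end."""
    flags_word, vni_word = struct.unpack("!II", packet[:8])
    assert flags_word >> 24 == 0x08, "VNI-valid flag must be set"
    vni = vni_word >> 8
    vlan_id = {v: k for k, v in VLAN_TO_VNI.items()}[vni]
    return vlan_id, packet[8:]
```

Because the VNI is 24 bits wide, this mapping scales to far more than the 4,094 networks a 12-bit VLAN ID allows, which is the "thousands of networks" property noted above.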
- FIG. 8 shows an example data center including a network fabric 801 that is configured to provide IP services to one or more racks in the data center, according to one embodiment.
- network fabric 801 is configured to connect IP services 843 to rack 803 and/or rack 855 .
- IP services 843 includes a router that connects the network fabric 801 to one or more telecommunications carriers and/or network providers.
- Computing device 125 of FIG. 1 is an example of computing device 825 .
- Network fabric 101 is an example of network fabric 801 .
- Racks 103 , 155 are an example of racks 803 , 855 .
- IP services 143 is an example of IP services 843 .
- Computing device 825 includes an internet configuration manager 829 that receives configuration data from client device 837 .
- Client device 837 communicates with computing device 825 using application programming interface 832 .
- Portal 833 connects computing device 825 to client device 837 using network 835 .
- network 835 includes a local area network, a wide area network, a wireless network, and/or the Internet.
- Network fabric 801 includes virtual networks 821 , 823 .
- virtual networks 821 and/or 823 are configured to connect IP services 843 to racks 803 , 855 .
- internet connectivity is provided to router 813 of rack 803 using TOR switch 805 .
- switch configuration manager 827 configures TOR switch 805 so that port 808 connects to one or more virtual networks 821 , 823 .
- one or more of virtual networks 821 , 823 are created in response to a request by client device 837 .
- this request to create one or more virtual networks is associated with the request for internet connectivity from client device 837 .
- switch configuration manager 827 alternatively and/or additionally configures TOR switch 857 so that port 861 connects to one or more virtual networks 821 , 823 .
- the configuration of TOR switch 857 is performed as part of responding to the request for internet connectivity received from the client device 837 described above.
- switch configuration manager 827 performs configuration of TOR switch 805 and/or 857 in response to a communication from internet configuration manager 829 after one or more virtual networks 821 , 823 have been created as described above.
- data regarding available IP addresses of the data center (e.g., that can be used for connecting to IP services 843 ) is stored in data store 831 .
- one or more IP addresses are allocated by internet configuration manager 829 for providing the requested internet connectivity.
- data store 831 stores records indicating allocated IP addresses associated with respective customers making requests for internet connectivity via their respective client devices. After internet connectivity is provided in response to a request, data store 831 is updated by internet configuration manager 829 to indicate the IP addresses newly-allocated for the internet connectivity.
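That bookkeeping step can be sketched as follows; the dictionary field names standing in for the records of data store 831 are hypothetical:

```python
def record_allocation(data_store, customer_id, new_addresses):
    """Update the data store after connectivity is provided: move the
    newly-allocated addresses out of the available pool and record them
    against the requesting customer."""
    for addr in new_addresses:
        data_store["available"].remove(addr)
    data_store["allocations"].setdefault(customer_id, []).extend(new_addresses)
```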
- FIG. 9 shows a method for providing IP services to racks in a data center, according to one embodiment.
- the method of FIG. 9 can be implemented in the system of FIGS. 1 and 8 .
- processing logic can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- the method of FIG. 9 is performed at least in part by one or more processors of computing device 825 of FIG. 8 .
- computing device 825 is implemented using the processors and memory of FIG. 6 or 7 .
- a request is received from a client device to provide internet protocol (IP) services to at least one computing device mounted in one or more racks of the data center.
- the request is received from client device 837 , and IP services 843 are provided to racks 803 , 855 by configuring virtual networks 821 , 823 .
- IP addresses are assigned that correspond to the IP services to be provided in response to the request.
- internet configuration manager 829 queries data store 831 to determine available IP addresses for associating to the internet connectivity.
- a virtual network is created in a network fabric of the data center.
- the virtual network is created in response to the request from the client device.
- the virtual network is created prior to receipt of the request from the client device.
- virtual network 821 is created in network fabric 801 .
- the virtual network is associated with the assigned IP addresses.
- virtual network 821 is configured using the assigned IP addresses.
- virtual network 821 is configured to connect a router (e.g., that connects to external network providers) associated with IP services 843 to port 808 of TOR switch 805 .
- one or more top-of-rack (TOR) switches are configured to connect one or more ports of each TOR switch to the created virtual network.
- the ports are connected to one or more additional virtual networks that existed prior to receiving the request from the client device.
- switch configuration manager 827 configures TOR switch 857 to connect port 861 to virtual network 821 .
- a method comprises: configuring a top-of-rack (TOR) switch (e.g., TOR switch 805 ) for connection to a router (e.g., router 813 ) mounted in a rack of a data center; receiving, from a client device (e.g., client device 837 ) that provides network configuration data for computing devices mounted in the rack, a request for internet protocol (IP) network connectivity; in response to receiving the request, providing the IP network connectivity to the router including creating a virtual network on a network fabric (e.g., network fabric 801 ) of the data center, and connecting the router to the virtual network; and delivering, via the router, IP services (e.g., IP services 843 ) using the internet protocol (IP) network connectivity to a computing device (e.g., server 809 ) mounted in the rack.
- the network configuration data comprises configuration data for one or more virtual networks that connect, via the router, the computing devices to IP services provided by the data center.
- the virtual network is a virtual extensible local area network (VXLAN) of the network fabric.
- the router is a first router, providing the IP network connectivity further includes connecting the TOR switch to a second router (e.g., router 863 ) of the data center, and the second router provides IP network connectivity for a plurality of racks of the data center.
- the virtual network is specified by the client device, and providing the IP network connectivity further includes configuring the network fabric to use the specified virtual network.
- providing the IP network connectivity further includes providing IP addresses used to configure the router for providing the IP services.
- the client device is a first client device, and providing the IP addresses includes communicating the IP addresses to the first client device.
- the method further comprises: allocating a first IP address space corresponding to the request from the first client device; and allocating a second IP address space corresponding to a request for IP services received from a second client device.
- the method further comprises: allocating a subnet from an IP address space of the network fabric; and specifying a gateway for configuring the router, wherein the subnet routes to the virtual network.
- data regarding the subnet and gateway is communicated, via a portal, to the client device, and the IP services include at least one of providing a firewall or implementing a virtual private network (VPN).
- the network fabric is implemented using a software-defined network comprising a control layer overlayed onto an infrastructure layer, wherein the control layer manages network services including the IP services, and wherein the infrastructure layer comprises hardware or software switches, and hardware or software routers.
- a controller manages the control layer including creating the virtual network on the network fabric.
- the method further comprises receiving, from the client device, a policy, and implementing, by the controller, the policy in the control layer so that the IP services are in compliance with the policy.
- the method further comprises: maintaining, in memory of the data center (e.g., using records in data store 831 ), configuration data regarding an available IP address space of the data center for providing the IP network connectivity; wherein providing the IP network connectivity to the router further includes selecting a portion of the available IP address space.
- providing the IP network connectivity to the router further includes configuring the TOR switch to provide access for the computing devices to the IP services.
- the virtual network is a first virtual network, the TOR switch is a first TOR switch (e.g., TOR switch 805 ), and the computing devices are first physical servers configured to run virtual servers including a first virtual server.
- the method further comprises: in response to the request from the client device, providing IP network connectivity, via a second TOR switch (e.g., TOR switch 857 ), to a second rack of the data center to provide access for second physical servers to IP services; and configuring a second virtual network of the network fabric to connect the first TOR switch to the second TOR switch.
- the second virtual network is configured to transmit data from the first virtual server to a second virtual server running on the second rack.
- the method further comprises communicating the network configuration data to a switch configuration manager of the data center for use in configuring the TOR switch.
- a method comprises: storing, in a data store (e.g., data store 831 ), configuration data regarding a plurality of computing devices that are provided internet protocol (IP) network connectivity by configuring a network fabric of a data center, wherein the configuration data includes available IP addresses of the data center; receiving, from a client device, a request for allocation of a portion of the IP addresses for one or more racks of the data center, wherein the IP connectivity is provided for use by at least one server mounted in the one or more racks; in response to receiving the request, providing the IP network connectivity in order to deliver IP services for the one or more racks, wherein providing the IP network connectivity includes configuring the network fabric using IP addresses assigned from the available IP addresses; configuring a first top-of-rack (TOR) switch of a first rack to connect the at least one server to the IP services; and updating the configuration data to indicate that the assigned IP addresses are associated with the one or more racks.
- the method further comprises communicating, by an internet configuration manager (e.g., manager 829 ), the configuration data to a switch configuration manager for use in configuring the TOR switch.
- the first TOR switch and a second TOR switch of a second rack are each configured to provide the IP network connectivity using at least a portion of the assigned IP addresses.
- a system comprises: at least one processing device; and memory containing instructions configured to instruct the at least one processing device to: receive, from a client device, a request to provide internet protocol (IP) services to at least one computing device mounted in one or more racks of a data center; assign IP addresses corresponding to the IP services to be provided; create a virtual network in a network fabric of the data center; in response to receiving the request, associate the virtual network with the assigned IP addresses; and configure at least one top-of-rack (TOR) switch to connect at least one port of the TOR switch to the virtual network.
- rack 803 and rack 855 are each connected to network fabric 801 using a physical fiber port (e.g., physical fiber port 104 , 153 ).
- a customer that controls racks 803 and 855 requests the creation of one or more IP network connectivity instances, with each instance being associated with a respective virtual local area network (VLAN).
- Each VLAN will appear on the network equipment of the customer.
- Each VLAN is connected to the physical fiber port so that the VLAN can be used, for example, for Internet access.
- various virtual networks of network fabric 801 are configured to provide the Internet access. However, one or more of these virtual networks are hidden from the customer.
- Each VLAN connected to the physical fiber port is exposed to the customer.
- the operator of the data center obtains connectivity from one or more upstream connectivity providers.
- the operator runs the routing protocols and owns the corresponding IP address space.
- the customer makes a request for IP connectivity, and a size of a subnet of the IP address space allocated to the customer is determined based at least in part on the number of public IP addresses desired by the customer.
- the customer can also specify a rate limit (e.g., 1 Gb/sec) for the IP network connectivity.
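A rate limit of this kind is normally enforced in switch or router hardware; as a software sketch, the customer-specified limit can be modeled as a token bucket (all numbers below are examples, not values from the patent):

```python
import time

class TokenBucket:
    """Sketch of enforcing a customer-specified rate limit (e.g., 1 Gb/sec)
    at the fabric edge. Tokens are measured in bits; frames are admitted
    only while enough tokens remain."""
    def __init__(self, rate_bits_per_sec, burst_bits):
        self.rate = rate_bits_per_sec
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, frame_bits):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True
        return False
```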
- the subnet is allocated to the customer and routes to the customer's VLAN.
- a router or another network device in the rack of the customer terminates as part of the VLAN.
- the customer can route traffic using the data center and can use the public IP addresses.
- the public IP addresses are used for firewalls and/or load balancers.
- various virtual networks are connected to computing resources assigned for use by a customer.
- one or more networks can be connected to processing and/or storage resources.
- the virtual networks are connected to the physical fiber port of one or more racks of the customer.
- a data center is administered using computing device 825 .
- An administrator of computing device 825 can be provided visibility for all networks that have been created by customers and/or otherwise created on network fabric 801 .
- the administrator is provided visibility to the computing resources of the data center that are used to support the instance.
- routing instances are visible to the administrator.
- the administrator is provided visibility to all subnets that have been allocated to customers as IP network connectivity has been provided. The administrator can also see and identify those customers that have been assigned particular IP address space(s), which permits management of the capacity of the total public IP address space of the data center.
- equipment of the customer is mounted in rack 803 and rack 855 .
- client device 837 uses API 832 to program the network fabric 801 so that multiple virtual networks can be created.
- these virtual networks can be used to connect server 809 to server 867 .
- switch configuration manager 827 connects ports 808 and 861 to these virtual networks.
- racks 803 and 855 are each located in data centers at a different geographic location (e.g., the data centers are greater than 1,000 to 5,000 meters apart).
- virtual networks created for a customer using client device 837 can be associated with a particular group of networks.
- the associations of virtual networks to respective groups of networks can be stored in data store 831 .
- the customer has a server connected to various ports on multiple switches.
- the customer can use portal 833 to select one of the switches and to specify a virtual network to associate with a particular identified port of the selected switch.
- the customer can create a group of virtual networks and specify that one or more specified virtual networks are to be bound to the particular identified port.
- switch configuration manager 827 configures the selected switch so that the specified virtual network is bound to the particular identified port.
- racks in a data center are equipped with top-of-rack (TOR) switches and cabled into the data center network fabric (e.g., implemented using a software-defined network fabric) prior to a customer's arrival at the data center (e.g., arrival to install new servers or other equipment in a rack).
- the network fabric includes secure layer 2 network connectivity throughout one or more data centers (e.g., data centers in the same or different metro regions).
- the customer can specify a selection of networks to work with the servers.
- the switches and the network fabric are automatically configured by the data center to implement the customer selection.
- a customer installs its servers, and then cables the servers to the TOR switches (and/or to other switches or routers in the rack).
- the customer uses a portal to configure network ports for the switches.
- the customer has a server plugged into port 1 on two TOR switches for a rack.
- the customer uses the portal to select one of the switches, and then uses a user interface of a client device to go to a screen for port 1 , at which the customer specifies that VLAN 100 is bound to that particular port 1 .
- the customer can also create a group of VLANs and specify that the group is bound to that port 1 .
- Data center automation software then automatically configures the TOR switches so that VLAN 100 is associated with the requested port 1 . In some cases, this process is applied across multiple racks simultaneously.
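The binding steps above can be sketched as follows; the `TorSwitch` model and the port and VLAN numbers are illustrative only:

```python
class TorSwitch:
    """Hypothetical TOR switch model: port number -> set of bound VLAN IDs."""
    def __init__(self, name):
        self.name = name
        self.port_vlans = {}

    def bind(self, port, vlan_ids):
        self.port_vlans.setdefault(port, set()).update(vlan_ids)

def bind_group_to_port(switches, port, vlan_group):
    """Apply one customer-specified VLAN group to the same port on every
    TOR switch in a set of racks, the 'applied across multiple racks
    simultaneously' case described above."""
    for sw in switches:
        sw.bind(port, vlan_group)
```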
- a customer uploads a photo or other image data to the data center (e.g., using a customer portal) prior to arrival at the data center.
- the photo is used as part of a security process to control physical access by the customer to its racks in the data center.
- This process can also include configuring a locking mechanism (e.g., a lock on a door to a rack and/or a door to a room in which the rack is located) that allows customer access to its racks.
- Security personnel at the data center can provide the customer with a badge (that incorporates the photo or other image data). The badge enables the customer to enter the data center facility and unlock its racks.
- a customer is added to an authentication service used by the data center.
- the authentication service manages access by and identifies the customer.
- the customer logs into a command center of the data center (e.g., the command center can be implemented by software that includes the switch configuration manager 127 of FIG. 1 ).
- using a client device (e.g., client device 837 of FIG. 8 ), the customer sends authentication credentials to the authentication service.
- physical access to one or more racks of the customer requires successful authentication by the authentication service.
- the command center can perform various actions.
- the command center maintains a database of available rack inventory.
- the command center allocates racks from available inventory in the location selected by the customer.
- the authentication service is updated with rack assignment information corresponding to these allocated racks.
- database records including the rack assignment information are accessed and used as a basis for configuring physical access by a customer.
- a security system at each physical data center facility where the selected racks are located is updated so that the customer is allowed to physically access the racks.
- one or more doors that permit entry into a physical facility and/or movement through doors inside the facility can be unlocked so that the customer is able to enter the data center and access its racks.
- a lock system used on the racks is configured to allow the customer to access the selected racks.
- the lock can be a physical-keyed lock, a magnetic lock, or a combination of physical and/or electronic locking mechanisms.
- IP connectivity (e.g., to provide internet access) is provided by IP services 843 of FIG. 8 .
- the customer can perform various further actions.
- the customer accesses the command center to complete user setup, including uploading the photo or image data via the portal (e.g., portal 833 of FIG. 8 ).
- the client device is a mobile device having a camera and is used to take a photo of personnel associated with the customer.
- the photo is uploaded to the command center via the API (e.g., API 832 ) above.
- the customer can check in with security personnel to receive a security badge or token.
- the badge can include the photo previously provided by the customer above.
- the customer enters the facility and unlocks the selected racks using the badge.
- the badge contains authentication credentials necessary to unlock the locking mechanism on the selected racks.
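The patent does not specify a credential format; as one illustrative scheme, the badge credential could be an HMAC over the customer and rack identifiers, verified against the rack assignment records before the lock is released:

```python
import hmac, hashlib

SECRET = b"example-facility-key"  # hypothetical per-facility signing key

def badge_token(customer_id, rack_id):
    """Derive the credential written to a badge for one rack."""
    msg = f"{customer_id}:{rack_id}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def unlock(rack_id, customer_id, presented_token, assignments):
    """Release the rack lock only if the rack is assigned to this customer
    in the data store AND the badge credential verifies."""
    if assignments.get(rack_id) != customer_id:
        return False
    expected = badge_token(customer_id, rack_id)
    return hmac.compare_digest(expected, presented_token)
```

`hmac.compare_digest` is used for the comparison so that verification time does not leak how many characters of a forged token were correct.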
- the customer installs computing equipment in the selected racks, and then cables the equipment to the TOR switches above (e.g., TOR switches 805 , 857 ).
- the customer accesses the command center and configures ports of the TOR switches.
- the switch ports are configured with a virtual local area network (VLAN) configuration desired for use by the customer.
- FIG. 10 shows an example building 1002 that houses racks 1004 , 1006 of a data center and uses doors and locks to control access to the racks, according to one embodiment.
- Rack 1004 has a TOR switch 1016 , and rack 1006 has a TOR switch 1018 .
- TOR switches 1016 and 1018 are configured, such as described above, to connect servers 1020 , 1022 to a network fabric of the data center.
- Lock 1012 physically secures door 1008 of rack 1004 .
- Lock 1014 physically secures door 1010 of rack 1006 .
- Lock 1012 and/or lock 1014 are released or unlocked in response to successful authentication of a customer.
- the customer authenticates itself using a security token or badge.
- the security badge is security badge 1028 which includes image 1030 .
- the customer authenticates itself using the client device that was used to provide configuration data for TOR switch 1016 and/or 1018 .
- a door 1024 controls interior access to building 1002 by persons on the exterior of building 1002 who desire entry.
- Lock 1026 physically locks door 1024 .
- security badge 1028 communicates with lock 1026 over a wireless link 1032 .
- Lock 1026 is unlocked in response to successful authentication of security badge 1028 by processing logic associated with lock 1026 , and/or a computing device associated with the data center.
- command center software communicates with lock 1012 , 1014 , and/or 1026 to provide physical access to one or more racks by a customer.
- switch configuration manager 127 of FIG. 1 monitors and/or receives data regarding the physical presence of customer personnel in a data center.
- switch configuration manager 127 receives data regarding the physical presence of personnel inside one or more identified racks (e.g., a hand inside a cage of a rack as determined by a camera of the rack).
- switch configuration manager 127 delays TOR switch configuration for a rack until data is received indicating that personnel are no longer physically present in the rack (e.g., image detection software determines that no movement has occurred in a predetermined time period).
- TOR switch configuration is postponed until data is received by switch configuration manager 127 that indicates that the rack is physically secure (e.g., physical access to the rack is closed off by one or more locks being engaged).
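That gating logic can be sketched as follows; the status fields standing in for the data received by the switch configuration manager are hypothetical:

```python
def maybe_apply_config(rack, pending_config):
    """Apply a pending TOR switch configuration only when the rack reports
    that no personnel are physically present and its locks are engaged;
    otherwise postpone, as described above."""
    if rack["personnel_present"] or not rack["locks_engaged"]:
        return "postponed"
    rack["tor_config"] = pending_config
    return "applied"
```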
- TOR switch 1016 connects to network interface controller ports of server 1020 for downlink communications and to spine switches of the data center for uplink communications.
- an API is used to manage TOR switch 1016 .
- the API is accessed by server 1020 and/or a client device located externally to rack 1004 for performing network configuration associated with a rack being physically accessed.
- FIG. 11 shows a method for configuring a TOR switch to connect a server and virtual networks of a network fabric to one or more ports of the TOR switch, according to one embodiment.
- the method of FIG. 11 can be implemented in the system of FIGS. 1 and 8 for a data center located in building 1002 of FIG. 10 .
- processing logic can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- the method of FIG. 11 is performed at least in part by one or more processors of computing device 825 of FIG. 8 .
- computing device 825 is implemented using the processors and memory of FIG. 6 or 7 .
- a top-of-rack (TOR) switch is mounted in a rack of a data center.
- TOR switch 1016 is mounted in rack 1004 .
- the TOR switch is connected to a network fabric of the data center.
- the network fabric provides network connectivity between multiple data centers.
- TOR switch 1016 is connected to network fabric 101 of FIG. 1 .
- a request is received to configure the TOR switch for connecting a server to the network fabric.
- the request is received after the server has been physically mounted in the rack and physically cabled to the TOR switch.
- the request is received from client device 837 of a customer after the customer has physically mounted and cabled server 1020 to TOR switch 1016 .
- in response to receiving the request, the TOR switch is automatically configured to connect the server and one or more virtual networks of the network fabric to one or more ports of the TOR switch.
- the TOR switch is configured by switch configuration manager 127 of FIG. 1 to connect the server and virtual networks 121 , 123 to the ports of the TOR switch.
- a method comprises: mounting a top-of-rack (TOR) switch (e.g., TOR switch 1016 ) in a rack (e.g., rack 1004 ) of a first data center (e.g., a data center enclosed by building 1002 ); connecting, using physical fiber (e.g., physical fiber port 104 of FIG. 1 ), the TOR switch to a network fabric (e.g., network fabric 101 of FIG. 1 ), wherein the network fabric provides network connectivity between a plurality of data centers including the first data center; receiving, from a client device, a request to configure the TOR switch for connecting a server to the network fabric, wherein the request is received after the server has been physically mounted in the rack and physically cabled to the TOR switch; and in response to receiving the request, automatically configuring the TOR switch to connect each of the server and one or more virtual networks of the network fabric to one or more ports of the TOR switch.
- the network connectivity is layer 2 connectivity.
- the layer 2 connectivity is implemented between the data centers using a plurality of virtual extensible local area networks (VXLANs).
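As background for the VXLAN-based layer 2 connectivity, the sketch below builds the 8-byte VXLAN header defined by RFC 7348: a flags byte whose I-bit (0x08) marks a valid VNI, 24 reserved bits, a 24-bit VXLAN Network Identifier, and a final reserved byte. This is standard VXLAN framing, not logic taken from the patent.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header of RFC 7348 for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit identifier")
    # Word 1: flags byte 0x08 (valid VNI) followed by 24 reserved bits.
    # Word 2: 24-bit VNI in the high bits, low 8 bits reserved.
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

hdr = vxlan_header(121)
print(hdr.hex())  # 0800000000007900
```

The 24-bit VNI is what allows roughly 16 million isolated layer 2 segments to share one underlay, far more than the 4096 available with plain VLAN tags.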
- the request comprises a request to provide an internet connection for the server
- the method further comprises automatically configuring the TOR switch to provide internet connectivity to the server via one or more virtual networks of the network fabric.
- providing the internet connectivity comprises connecting the server to a carrier (e.g., carriers 145 of FIG. 1) or internet service provider via a router of the network fabric.
- the server is connected to a first port of the TOR switch, wherein an indication is received from the client device that specifies a first virtual network to be bound to the first port, and wherein configuring the TOR switch includes connecting the first port to the first virtual network.
- the method further comprises receiving, from the client device, a request to create a network group that includes a plurality of virtual networks including the first virtual network.
- the method further comprises: receiving, from the client device, a request to bind the network group to the first port; and in response to receiving the request to bind the network group, configuring the TOR switch to connect each of the plurality of virtual networks to the first port.
- the TOR switch is a first switch and the rack is a first rack
- the method further comprises, in response to receiving the request to bind the network group, automatically configuring a second TOR switch of a second rack to connect at least one of the plurality of virtual networks to a second port of the second TOR switch.
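The network-group binding described above can be sketched as a single request that fans out across racks: each virtual network in the group is connected to the relevant port on each affected TOR switch. All names below are illustrative assumptions, not APIs from the patent.

```python
def bind_network_group(switch_ports, group):
    """Bind every virtual network in `group` to a port on each switch.

    switch_ports: mapping of switch id -> port to bind on that switch
    group: iterable of virtual network ids forming the network group
    Returns the resulting bindings, one entry per switch.
    """
    bindings = {}
    for switch_id, port in switch_ports.items():
        # In a real fabric this would push per-port VLAN/VXLAN configuration
        # to the switch; here we only record which networks land on which port.
        bindings[switch_id] = {port: sorted(group)}
    return bindings

# One bind request configures ports on two racks' TOR switches at once.
result = bind_network_group({"tor-1016": 1, "tor-1018": 2}, group=[121, 123])
print(result)
# {'tor-1016': {1: [121, 123]}, 'tor-1018': {2: [121, 123]}}
```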
- FIG. 12 shows a method for controlling physical access to a rack in a data center, according to one embodiment.
- the method of FIG. 12 can be implemented in the system of FIGS. 1 and 8 (e.g., for a data center located in building 1002 of FIG. 10 ).
- processing logic can include hardware (e.g., processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof.
- the method of FIG. 12 is performed at least in part by one or more processors of computing device 825 of FIG. 8 .
- computing device 825 is implemented using the processors and memory of FIG. 6 or 7.
- a TOR switch is mounted in a rack.
- TOR switch 1018 is mounted in rack 1006.
- the TOR switch is connected to a network fabric of the data center.
- TOR switch 1018 is connected to network fabric 101.
- physical access to the rack is controlled.
- physical access to the rack is controlled using lock 1012 on door 1008.
- a request to physically access the rack is received from a device.
- the request includes authentication credentials.
- the request to access the rack is received from security badge 1028.
- the request is for access to the rack in building 1002 via entry through door 1024.
- the request to access the rack is received from client device 137 or another computing device.
- the device is authenticated.
- authentication credentials provided by security badge 1028 are authenticated.
- a method comprises: mounting a top-of-rack (TOR) switch in a rack; connecting the TOR switch to a network fabric of a first data center; controlling, using a lock (e.g., lock 1026), physical access to the rack; receiving, from a computing device (e.g., a client device, a security token, etc.), a request to access the rack, wherein the request includes authentication credentials; in response to receiving the request to access the rack, authenticating the computing device; and in response to authenticating the computing device, configuring the lock to provide the physical access to the rack.
- connecting the TOR switch to the network fabric is performed prior to configuring the lock to provide the physical access to the rack.
- receiving the request to access the rack further includes receiving the authentication credentials from a security token or badge, and the method further comprises: in response to authenticating the computing device, releasing the lock.
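The authenticate-then-release sequence above can be sketched as follows. The HMAC-based credential scheme and all names here are assumptions for illustration; the patent specifies the flow (request with credentials, authentication, then lock release), not this particular mechanism.

```python
import hashlib
import hmac

# Hypothetical provisioning secret shared with issued badges/tokens.
SECRET = b"data-center-demo-secret"

def credential_for(badge_id):
    """Derive the credential a provisioned badge would present."""
    return hmac.new(SECRET, badge_id.encode(), hashlib.sha256).hexdigest()

def request_access(badge_id, credential, authorized_badges):
    """Authenticate the requesting device; release the lock only on success."""
    expected = credential_for(badge_id)
    if badge_id in authorized_badges and hmac.compare_digest(credential, expected):
        return "unlocked"
    return "denied"

cred = credential_for("badge-1028")
print(request_access("badge-1028", cred, {"badge-1028"}))  # unlocked
print(request_access("badge-9999", cred, {"badge-1028"}))  # denied
```

Note that authorization (is this badge allowed at this rack?) and authentication (is the credential genuine?) are checked independently before the lock is configured to open.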
- the method further comprises: receiving image data for an image of a person to be provided access to the rack; and providing, using the received image data, a display of the image on the security token or badge.
- image 1030 is displayed on security badge 1028.
- the request to access the rack and the data regarding the image are each received from a client device over a portal.
- the method further comprises causing a display in a user interface of the client device, the display presenting available internet connectivity in each of a plurality of data centers including the first data center.
- configuring the lock to provide the physical access to the rack includes providing access for physical installation of at least one computing device that logically connects to a network port of the TOR switch.
- the method further comprises: in response to authenticating the computing device, unlocking a first door of a building (e.g., door 1024 of building 1002) that houses the first data center, wherein unlocking the first door permits physical entry by a person into the building; wherein the lock secures a second door (e.g., door 1008) of the rack, and configuring the lock to provide the physical access to the rack includes unlocking the second door.
- a system comprises: at least one processing device; and memory containing instructions configured to instruct the at least one processing device to: mount a top-of-rack (TOR) switch in a rack of a data center; connect the TOR switch to a network fabric of the data center; receive, over a network, a request to configure the TOR switch for connecting a server to the network fabric; and in response to receiving the request, configure the TOR switch to connect the server to one or more ports of the TOR switch.
- the instructions are further configured to instruct the at least one processing device to: after connecting the TOR switch to the network fabric, receive a request to access the rack; in response to receiving the request, authenticate the request; and in response to authenticating the request, configure a lock to provide physical access to the rack.
- the instructions are further configured to instruct the at least one processing device to: receive a request to provide an internet connection for the server (e.g., provide IP services 843 to server 1020); and in response to receiving the request to provide the internet connection, further configure the TOR switch to provide internet connectivity to the server via one or more virtual networks of the network fabric.
- a lock on the door to a data center building is integrated into an electronic badge reader system.
- the lock is programmed to respond to the reading of an electronic badge associated with the customer (e.g., the security badge is associated with the customer in a database record of the data center).
- the lock is programmed by data center software so that the customer can use the electronic badge to physically enter the data center and access the customer's rack (e.g., for installation and/or service of equipment).
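The badge-to-customer association described above can be sketched as two database records consulted at the door: which customer a badge belongs to, and which rack that customer leases. The table names and structure are illustrative assumptions, not the patent's schema.

```python
# Hypothetical data center records (badge registry and rack leases).
badges = {"badge-1028": "customer-A"}       # badge id -> customer
rack_leases = {"rack-1004": "customer-A"}   # rack id  -> leasing customer

def badge_opens_rack(badge_id, rack_id):
    """A badge opens a rack only if it belongs to the customer leasing that rack."""
    customer = badges.get(badge_id)
    return customer is not None and rack_leases.get(rack_id) == customer

print(badge_opens_rack("badge-1028", "rack-1004"))  # True
print(badge_opens_rack("badge-1028", "rack-9999"))  # False
```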
- the disclosure includes various devices that perform the methods and implement the systems described above, including data processing systems that perform these methods, and computer-readable media containing instructions that, when executed on data processing systems, cause those systems to perform these methods.
- "Coupled to" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including electrical, optical, and magnetic connections.
- various functions and operations may be described as being performed by or caused by software code to simplify description. However, those skilled in the art will recognize that what is meant by such expressions is that the functions result from execution of the code by one or more processors, such as a microprocessor, Application-Specific Integrated Circuit (ASIC), graphics processor, and/or a Field-Programmable Gate Array (FPGA).
- the functions and operations can be implemented using special purpose circuitry (e.g., logic circuitry), with or without software instructions.
- Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. Thus, the techniques are not limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by a computing device.
- At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computing device or other system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
- Routines executed to implement the embodiments may be implemented as part of an operating system, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface).
- the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
- a machine readable medium can be used to store software and data which when executed by a computing device causes the device to perform various methods.
- the executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
- the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session.
- the data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in entirety at a particular instance of time.
- Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, solid-state drive storage media, removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMs), Digital Versatile Disks (DVDs), etc.), among others.
- the computer-readable media may store the instructions.
- a tangible or non-transitory machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, mobile device, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
- hardwired circuitry may be used in combination with software instructions to implement the techniques.
- the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by a computing device.
- computing devices include, but are not limited to, a server, a centralized computing platform, a system of multiple computing processors and/or components, a mobile device, a user terminal, a vehicle, a personal communications device, a wearable digital device, an electronic kiosk, a general purpose computer, an electronic document reader, a tablet, a laptop computer, a smartphone, a digital camera, a residential domestic appliance, a television, or a digital music player.
- Additional examples of computing devices include devices that are part of what is called “the internet of things” (IOT).
- Such “things” may have occasional interactions with their owners or administrators, who may monitor the things or modify settings on these things. In some cases, such owners or administrators play the role of users with respect to the “thing” devices.
- the primary mobile device of a user may be an administrator server with respect to a paired “thing” device that is worn by the user (e.g., an Apple watch).
- the computing device can be a host system, which is implemented, for example, as a desktop computer, laptop computer, network server, mobile device, or other computing device that includes a memory and a processing device.
- the host system can include or be coupled to a memory sub-system so that the host system can read data from or write data to the memory sub-system.
- the host system can be coupled to the memory sub-system via a physical host interface.
- Examples of a physical host interface include, but are not limited to, a serial advanced technology attachment (SATA) interface, a peripheral component interconnect express (PCIe) interface, universal serial bus (USB) interface, Fibre Channel, Serial Attached SCSI (SAS), a double data rate (DDR) memory bus, etc.
- the physical host interface can be used to transmit data between the host system and the memory sub-system.
- the host system can further utilize an NVM Express (NVMe) interface to access memory components of the memory sub-system when the memory sub-system is coupled with the host system by the PCIe interface.
- the physical host interface can provide an interface for passing control, address, data, and other signals between the memory sub-system and the host system.
- the host system can access multiple memory sub-systems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.
- the host system includes a processing device and a controller.
- the processing device of the host system can be, for example, a microprocessor, a graphics processing unit, a central processing unit (CPU), an FPGA, a processing core of a processor, an execution unit, etc.
- the processing device can be a single package that combines an FPGA and a microprocessor, in which the microprocessor does most of the processing, but passes off certain predetermined, specific tasks to an FPGA block.
- the processing device is a soft microprocessor (also sometimes called softcore microprocessor or a soft processor), which is a microprocessor core implemented using logic synthesis.
- the soft microprocessor can be implemented via different semiconductor devices containing programmable logic (e.g., ASIC, FPGA, or CPLD).
- the controller is a memory controller, a memory management unit, and/or an initiator. In one example, the controller controls the communications over a bus coupled between the host system and the memory sub-system.
- the controller can send commands or requests to the memory sub-system for desired access to the memory components.
- the controller can further include interface circuitry to communicate with the memory sub-system.
- the interface circuitry can convert responses received from the memory sub-system into information for the host system.
- the controller of the host system can communicate with the controller of the memory sub-system to perform operations such as reading data, writing data, or erasing data at the memory components and other such operations.
- a controller can be integrated within the same package as the processing device. In other instances, the controller is separate from the package of the processing device.
- the controller and/or the processing device can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, a cache memory, or a combination thereof.
- the controller and/or the processing device can be a microcontroller, special purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.
- the memory components can include any combination of the different types of non-volatile memory components and/or volatile memory components.
- An example of non-volatile memory components includes a negative-and (NAND) type flash memory.
- Each of the memory components can include one or more arrays of memory cells such as single level cells (SLCs) or multi-level cells (MLCs) (e.g., triple level cells (TLCs) or quad-level cells (QLCs)).
- a particular memory component can include both an SLC portion and an MLC portion of memory cells.
- Each of the memory cells can store one or more bits of data (e.g., data blocks) used by the host system.
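The bits-per-cell figures above translate directly into capacity: the same physical cell array stores 1, 2, 3, or 4 bits per cell depending on whether it is operated as SLC, MLC, TLC, or QLC. The cell count below is an example figure, not data from the patent.

```python
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def capacity_bytes(num_cells, cell_mode):
    """Usable capacity in bytes for a cell array operated in the given mode."""
    return num_cells * BITS_PER_CELL[cell_mode] // 8

cells = 8_000_000_000  # example: 8 billion cells
print(capacity_bytes(cells, "SLC"))  # 1000000000 bytes (~1 GB)
print(capacity_bytes(cells, "QLC"))  # 4000000000 bytes (~4 GB)
```

This is why a component with both an SLC portion and an MLC portion trades capacity in the SLC region for its higher endurance and speed.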
- although non-volatile memory components such as NAND-type flash memory are described, the memory components can be based on any other type of memory, such as a volatile memory.
- the memory components can be, but are not limited to, random access memory (RAM), read-only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), phase change memory (PCM), magneto random access memory (MRAM), Spin Transfer Torque (STT)-MRAM, ferroelectric random-access memory (FeTRAM), ferroelectric RAM (FeRAM), conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide based RRAM (OxRAM), negative-or (NOR) flash memory, electrically erasable programmable read-only memory (EEPROM), nanowire-based non-volatile memory, memory that incorporates memristor technology, and a cross-point array of non-volatile memory cells.
- a cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write in-place operation, where a non-volatile memory cell can be programmed without the non-volatile memory cell being previously erased. Furthermore, the memory cells of the memory components can be grouped as memory pages or data blocks that can refer to a unit of the memory component used to store data.
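The write-in-place contrast above in miniature: flash-style cells must be erased (all bits set to 1) before programming, and programming can only clear bits (1 to 0), whereas a write-in-place cell accepts any new value directly. This is a purely illustrative bit-level model, not the patent's circuitry.

```python
def flash_program(cell, value):
    """Program a flash-style cell: bits may only be cleared (1 -> 0).

    Writing a 1 over a 0 would require an erase cycle first, so we reject
    any value that tries to set a bit that the cell has already cleared.
    """
    if value & ~cell:
        raise ValueError("erase required before programming")
    return cell & value

def write_in_place(cell, value):
    """A cross-point-style cell can be overwritten without a prior erase."""
    return value

erased = 0b1111
cell = flash_program(erased, 0b1010)   # ok: only clears bits
print(cell)                             # 10
print(write_in_place(cell, 0b0101))     # 5: direct overwrite, no erase
try:
    flash_program(cell, 0b0101)         # would need to set bits -> erase first
except ValueError as e:
    print("flash:", e)
```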
- the controller of the memory sub-system can communicate with the memory components to perform operations such as reading data, writing data, or erasing data at the memory components and other such operations (e.g., in response to commands scheduled on a command bus by a controller).
- a controller can include a processing device (processor) configured to execute instructions stored in local memory.
- the local memory of the controller can include an embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory sub-system, including handling communications between the memory sub-system and the host system.
- the local memory can include memory registers storing memory pointers, fetched data, etc.
- the local memory can also include read-only memory (ROM) for storing micro-code.
- a memory sub-system may not include a controller, and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory sub-system).
- the controller can receive commands or operations from the host system and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory components.
- the controller can be responsible for other operations such as wear leveling operations, garbage collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical block address and a physical block address that are associated with the memory components.
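The logical-to-physical address translation mentioned above can be sketched as a minimal flash translation layer: the host addresses logical blocks, and the controller's mapping lets it place each write on any physical block, which is also what makes wear leveling possible (rewrites go to fresh blocks). The class and method names are illustrative assumptions.

```python
class FlashTranslationLayer:
    """Minimal sketch of a controller's logical-to-physical block mapping."""

    def __init__(self, num_physical_blocks):
        self.l2p = {}                                  # logical -> physical block
        self.free = list(range(num_physical_blocks))   # unallocated physical blocks

    def write(self, lba):
        # Write out-of-place: each write of a logical block address takes a
        # fresh physical block; the old block (if any) becomes garbage that a
        # later garbage-collection pass can reclaim.
        new_block = self.free.pop(0)
        old_block = self.l2p.get(lba)
        self.l2p[lba] = new_block
        return new_block, old_block

    def read(self, lba):
        return self.l2p[lba]

ftl = FlashTranslationLayer(num_physical_blocks=4)
ftl.write(lba=7)          # first write of LBA 7 lands on physical block 0
new, old = ftl.write(7)   # rewrite goes to block 1; block 0 is now stale
print(new, old, ftl.read(7))  # 1 0 1
```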
- the controller can further include host interface circuitry to communicate with the host system via the physical host interface.
- the host interface circuitry can convert the commands received from the host system into command instructions to access the memory components as well as convert responses associated with the memory components into information for the host system.
- the memory sub-system can also include additional circuitry or components that are not illustrated.
- the memory sub-system can include a cache or buffer (e.g., DRAM or SRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller and decode the address to access the memory components.
Description
- Provisioning hyper-converged infrastructure in a shorter period of time.
- Extending colocation environments and connectivity within or across data centers in different geographic locations.
- Retaining full control by the customer over its network and compute environment, with dedicated hardware.
- Reducing complexity by delivering multiple services over a single physical network connection.
Claims (13)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/695,696 US11496365B2 (en) | 2019-06-17 | 2019-11-26 | Automated access to racks in a colocation data center |
US17/962,780 US11838182B2 (en) | 2019-06-17 | 2022-10-10 | Automated access to racks in a colocation data center |
US18/525,373 US20240106711A1 (en) | 2019-06-17 | 2023-11-30 | Automated access to racks in a colocation data center |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/442,997 US11374879B2 (en) | 2019-06-17 | 2019-06-17 | Network configuration of top-of-rack switches across multiple racks in a data center |
US16/695,696 US11496365B2 (en) | 2019-06-17 | 2019-11-26 | Automated access to racks in a colocation data center |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/442,997 Continuation US11374879B2 (en) | 2019-06-17 | 2019-06-17 | Network configuration of top-of-rack switches across multiple racks in a data center |
US16/442,997 Continuation-In-Part US11374879B2 (en) | 2019-06-17 | 2019-06-17 | Network configuration of top-of-rack switches across multiple racks in a data center |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/962,780 Continuation US11838182B2 (en) | 2019-06-17 | 2022-10-10 | Automated access to racks in a colocation data center |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200396127A1 (en) | 2020-12-17 |
US11496365B2 (en) | 2022-11-08 |
Family
ID=73746134
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/695,696 Active 2039-07-08 US11496365B2 (en) | 2019-06-17 | 2019-11-26 | Automated access to racks in a colocation data center |
US17/962,780 Active US11838182B2 (en) | 2019-06-17 | 2022-10-10 | Automated access to racks in a colocation data center |
US18/525,373 Pending US20240106711A1 (en) | 2019-06-17 | 2023-11-30 | Automated access to racks in a colocation data center |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/962,780 Active US11838182B2 (en) | 2019-06-17 | 2022-10-10 | Automated access to racks in a colocation data center |
US18/525,373 Pending US20240106711A1 (en) | 2019-06-17 | 2023-11-30 | Automated access to racks in a colocation data center |
Country Status (1)
Country | Link |
---|---|
US (3) | US11496365B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11838182B2 (en) | 2019-06-17 | 2023-12-05 | Cyxtera Data Centers, Inc. | Automated access to racks in a colocation data center |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210314385A1 (en) * | 2020-04-07 | 2021-10-07 | Cisco Technology, Inc. | Integration of hyper converged infrastructure management with a software defined network control |
US11038752B1 (en) * | 2020-06-16 | 2021-06-15 | Hewlett Packard Enterprise Development Lp | Creating a highly-available private cloud gateway based on a two-node hyperconverged infrastructure cluster with a self-hosted hypervisor management system |
JP2023061144A (en) * | 2021-10-19 | 2023-05-01 | 横河電機株式会社 | Control system, control method, and program |
Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080174954A1 (en) * | 2007-01-24 | 2008-07-24 | Vangilder James W | System and method for evaluating equipment rack cooling performance |
US20090055897A1 (en) * | 2007-08-21 | 2009-02-26 | American Power Conversion Corporation | System and method for enforcing network device provisioning policy |
US7937470B2 (en) * | 2000-12-21 | 2011-05-03 | Oracle International Corp. | Methods of determining communications protocol latency |
US20120084389A1 (en) * | 2010-09-30 | 2012-04-05 | Fujitsu Limited | Technique for providing service through data center |
US20120297037A1 (en) * | 2011-05-16 | 2012-11-22 | Hitachi, Ltd. | Computer system for allocating ip address to communication apparatus in computer subsystem newly added and method for newly adding computer subsystem to computer system |
US20120303767A1 (en) * | 2011-05-24 | 2012-11-29 | Aleksandr Renzin | Automated configuration of new racks and other computing assets in a data center |
US20130054426A1 (en) | 2008-05-20 | 2013-02-28 | Verizon Patent And Licensing Inc. | System and Method for Customer Provisioning in a Utility Computing Platform |
US8458329B2 (en) | 2010-06-30 | 2013-06-04 | Vmware, Inc. | Data center inventory management using smart racks |
US8484355B1 (en) | 2008-05-20 | 2013-07-09 | Verizon Patent And Licensing Inc. | System and method for customer provisioning in a utility computing platform |
US8537536B1 (en) * | 2011-12-16 | 2013-09-17 | Paul F. Rembach | Rapid deployment mobile data center |
US20150009831A1 (en) | 2013-07-05 | 2015-01-08 | Red Hat, Inc. | Wild card flows for switches and virtual switches based on hints from hypervisors |
US20150019733A1 (en) * | 2013-06-26 | 2015-01-15 | Amazon Technologies, Inc. | Management of computing sessions |
US20150100560A1 (en) * | 2013-10-04 | 2015-04-09 | Nicira, Inc. | Network Controller for Managing Software and Hardware Forwarding Elements |
US20160013974A1 (en) * | 2014-07-11 | 2016-01-14 | Vmware, Inc. | Methods and apparatus for rack deployments for virtual computing environments |
US9294349B2 (en) | 2013-10-15 | 2016-03-22 | Cisco Technology, Inc. | Host traffic driven network orchestration within data center fabric |
US20160087859A1 (en) | 2014-09-23 | 2016-03-24 | Chia-Chee Kuan | Monitor a data center infrastructure |
US20160163177A1 (en) * | 2007-10-24 | 2016-06-09 | Michael Edward Klicpera | Water Use/Water Energy Use Monitor and/or Leak Detection System |
US20170039836A1 (en) * | 2015-08-04 | 2017-02-09 | Solar Turbines Incorporated | Monitoring System for a Gas Turbine Engine |
US20170149931A1 (en) | 2015-11-24 | 2017-05-25 | Vmware, Inc. | Methods and apparatus to manage workload domains in virtual server racks |
US20180367607A1 (en) | 2017-06-14 | 2018-12-20 | Vmware, Inc. | Top-of-rack switch replacement for hyper-converged infrastructure computing environments |
US20190028342A1 (en) | 2017-07-20 | 2019-01-24 | Vmware Inc. | Methods and apparatus to configure switches of a virtual rack |
US20190188022A1 (en) * | 2017-12-20 | 2019-06-20 | At&T Intellectual Property I, L.P. | Virtual Redundancy for Active-Standby Cloud Applications |
US20200305301A1 (en) * | 2019-03-22 | 2020-09-24 | Aic Inc. | Method for remotely clearing abnormal status of racks applied in data center |
US20200328914A1 (en) * | 2016-04-29 | 2020-10-15 | New H3C Technologies Co., Ltd. | Packet transmission |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201227229A (en) * | 2010-12-31 | 2012-07-01 | Hon Hai Prec Ind Co Ltd | Container data center |
US11521139B2 (en) * | 2012-09-24 | 2022-12-06 | Amazon Technologies, Inc. | Providing system resources with secure containment units |
US9319392B1 (en) * | 2013-09-27 | 2016-04-19 | Amazon Technologies, Inc. | Credential management |
US10257268B2 (en) * | 2015-03-09 | 2019-04-09 | Vapor IO Inc. | Distributed peer-to-peer data center management |
US9942935B2 (en) * | 2015-11-17 | 2018-04-10 | Dell Products, Lp | System and method for providing a wireless failover of a management connection in a server rack of a data center |
US10241555B2 (en) * | 2015-12-04 | 2019-03-26 | Dell Products, Lp | System and method for monitoring a battery status in a server in a data center |
US10298460B2 (en) * | 2015-12-21 | 2019-05-21 | Dell Products, Lp | System and method for aggregating communication and control of wireless end-points in a data center |
US11102063B2 (en) * | 2017-07-20 | 2021-08-24 | Vmware, Inc. | Methods and apparatus to cross configure network resources of software defined data centers |
US20190069436A1 (en) * | 2017-08-23 | 2019-02-28 | Hewlett Packard Enterprise Development Lp | Locking mechanism of a module of a data center |
US11070392B2 (en) * | 2017-10-27 | 2021-07-20 | Hilton International Holding Llc | System and method for provisioning internet access |
US20190164165A1 (en) * | 2017-11-28 | 2019-05-30 | Ca, Inc. | Cross-device, multi-factor authentication for interactive kiosks |
US11496365B2 (en) | 2019-06-17 | 2022-11-08 | Cyxtera Data Centers, Inc. | Automated access to racks in a colocation data center |
- 2019
  - 2019-11-26 US US16/695,696 patent/US11496365B2/en active Active
- 2022
  - 2022-10-10 US US17/962,780 patent/US11838182B2/en active Active
- 2023
  - 2023-11-30 US US18/525,373 patent/US20240106711A1/en active Pending
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7937470B2 (en) * | 2000-12-21 | 2011-05-03 | Oracle International Corp. | Methods of determining communications protocol latency |
US20080174954A1 (en) * | 2007-01-24 | 2008-07-24 | Vangilder James W | System and method for evaluating equipment rack cooling performance |
US20090055897A1 (en) * | 2007-08-21 | 2009-02-26 | American Power Conversion Corporation | System and method for enforcing network device provisioning policy |
US20160163177A1 (en) * | 2007-10-24 | 2016-06-09 | Michael Edward Klicpera | Water Use/Water Energy Use Monitor and/or Leak Detection System |
US20130054426A1 (en) | 2008-05-20 | 2013-02-28 | Verizon Patent And Licensing Inc. | System and Method for Customer Provisioning in a Utility Computing Platform |
US8484355B1 (en) | 2008-05-20 | 2013-07-09 | Verizon Patent And Licensing Inc. | System and method for customer provisioning in a utility computing platform |
US8458329B2 (en) | 2010-06-30 | 2013-06-04 | Vmware, Inc. | Data center inventory management using smart racks |
US20120084389A1 (en) * | 2010-09-30 | 2012-04-05 | Fujitsu Limited | Technique for providing service through data center |
US20120297037A1 (en) * | 2011-05-16 | 2012-11-22 | Hitachi, Ltd. | Computer system for allocating ip address to communication apparatus in computer subsystem newly added and method for newly adding computer subsystem to computer system |
US20120303767A1 (en) * | 2011-05-24 | 2012-11-29 | Aleksandr Renzin | Automated configuration of new racks and other computing assets in a data center |
US20140304336A1 (en) * | 2011-05-24 | 2014-10-09 | Facebook, Inc. | Automated Configuration of New Racks and Other Computing Assets in a Data Center |
US8537536B1 (en) * | 2011-12-16 | 2013-09-17 | Paul F. Rembach | Rapid deployment mobile data center |
US20150019733A1 (en) * | 2013-06-26 | 2015-01-15 | Amazon Technologies, Inc. | Management of computing sessions |
US20150009831A1 (en) | 2013-07-05 | 2015-01-08 | Red Hat, Inc. | Wild card flows for switches and virtual switches based on hints from hypervisors |
US20150100560A1 (en) * | 2013-10-04 | 2015-04-09 | Nicira, Inc. | Network Controller for Managing Software and Hardware Forwarding Elements |
US9294349B2 (en) | 2013-10-15 | 2016-03-22 | Cisco Technology, Inc. | Host traffic driven network orchestration within data center fabric |
US20160013974A1 (en) * | 2014-07-11 | 2016-01-14 | Vmware, Inc. | Methods and apparatus for rack deployments for virtual computing environments |
US20160087859A1 (en) | 2014-09-23 | 2016-03-24 | Chia-Chee Kuan | Monitor a data center infrastructure |
US20170039836A1 (en) * | 2015-08-04 | 2017-02-09 | Solar Turbines Incorporated | Monitoring System for a Gas Turbine Engine |
US20170149931A1 (en) | 2015-11-24 | 2017-05-25 | Vmware, Inc. | Methods and apparatus to manage workload domains in virtual server racks |
US20200328914A1 (en) * | 2016-04-29 | 2020-10-15 | New H3C Technologies Co., Ltd. | Packet transmission |
US20180367607A1 (en) | 2017-06-14 | 2018-12-20 | Vmware, Inc. | Top-of-rack switch replacement for hyper-converged infrastructure computing environments |
US20190028342A1 (en) | 2017-07-20 | 2019-01-24 | Vmware Inc. | Methods and apparatus to configure switches of a virtual rack |
US20190188022A1 (en) * | 2017-12-20 | 2019-06-20 | At&T Intellectual Property I, L.P. | Virtual Redundancy for Active-Standby Cloud Applications |
US20200305301A1 (en) * | 2019-03-22 | 2020-09-24 | Aic Inc. | Method for remotely clearing abnormal status of racks applied in data center |
Non-Patent Citations (1)
Title |
---|
Perry, Christian, "Cyxtera: Slow provisioning derails IT transformation efforts," 451 Research, Voice of the Enterprise: Servers and Converged Infrastructure, Budgets and Outlook, 2017, published Apr. 6, 2018 (8 pages). |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11838182B2 (en) | 2019-06-17 | 2023-12-05 | Cyxtera Data Centers, Inc. | Automated access to racks in a colocation data center |
Also Published As
Publication number | Publication date |
---|---|
US20200396127A1 (en) | 2020-12-17 |
US20230033884A1 (en) | 2023-02-02 |
US11838182B2 (en) | 2023-12-05 |
US20240106711A1 (en) | 2024-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11431654B2 (en) | | Network service integration into a network fabric of a data center |
US11374880B2 (en) | | Automated deployment of internet connectivity to rack switches in a data center |
US11838182B2 (en) | | Automated access to racks in a colocation data center |
US11374879B2 (en) | | Network configuration of top-of-rack switches across multiple racks in a data center |
US11218364B2 (en) | | Network-accessible computing service for micro virtual machines |
US10999406B2 (en) | | Attaching service level agreements to application containers and enabling service assurance |
JP6670025B2 (en) | | Multi-tenant-aware Dynamic Host Configuration Protocol (DHCP) mechanism for cloud networking |
US10996972B2 (en) | | Multi-tenant support on virtual machines in cloud computing networks |
US10833949B2 (en) | | Extension resource groups of provider network services |
US9634948B2 (en) | | Management of addresses in virtual machines |
CN110073355A (en) | | Secure execution environments on server |
US12106132B2 (en) | | Provider network service extensions |
US11082485B2 (en) | | Swapping non-virtualizing and self-virtualizing devices |
US9417997B1 (en) | | Automated policy based scheduling and placement of storage resources |
US20240098089A1 (en) | | Metadata customization for virtual private label clouds |
US20230269114A1 (en) | | Automated cloud on-ramp in a data center |
US11363113B1 (en) | | Dynamic micro-region formation for service provider network independent edge locations |
US11575614B2 (en) | | Managing input/output priority based on response time |
US10783465B1 (en) | | Dynamic port bandwidth for dedicated physical connections to a provider network |
Thielemans et al. | | Experiences with on-premise open source cloud infrastructure with network performance validation |
US11943230B2 (en) | | System and method for dynamic orchestration of virtual gateways based on user role |
US11082496B1 (en) | | Adaptive network provisioning |
US20240187410A1 (en) | | Preventing masquerading service attacks |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: CYXTERA DATA CENTERS, INC., FLORIDA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LOCHHEAD, JASON ANTHONY;SUBRAMANIAN, MANIKANDAN;HESS, DAVID;AND OTHERS;REEL/FRAME:051195/0793; Effective date: 20191205 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED; Free format text: AWAITING TC RESP, ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: CITIBANK, N.A., AS COLLATERAL AGENT, NEW YORK; Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:CYXTERA DATA CENTERS, INC.;REEL/FRAME:063117/0893; Effective date: 20230314 |
| AS | Assignment | Owner name: PHOENIX INFRASTRUCTURE LLC, NEW YORK; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYXTERA DATA CENTERS, INC.;REEL/FRAME:066216/0286; Effective date: 20240112 |