PowerEdge VRTX

Dell PowerEdge VRTX is a computer hardware product line from Dell.[1] It is a mini-blade chassis with a built-in storage system. The VRTX comes in two models: a 19" rack version that is 5 rack units high and a stand-alone tower system.[2]

Specifications

The VRTX system is partially based on the Dell M1000e blade enclosure and shares some technologies and components, but it also differs from that system in several ways. The M1000e can support an EqualLogic storage area network that connects the servers to the storage via iSCSI, while the VRTX uses a shared PowerEdge RAID Controller (6Gbit PERC8). A second difference is the option to add certain PCIe cards (Gen2 support) and assign them to any of the four servers.[1][2]

Servers: The VRTX chassis has four half-height slots available for Ivy Bridge-based PowerEdge blade servers. At launch the PE-M520 (Xeon E5-2400v2) and the PE-M620 (Xeon E5-2600v2) were the only two supported server blades; the M520 has since been discontinued. The same blades are used in the M1000e, but for use in the VRTX they need a specific configuration, using two PCIe 2.0 mezzanine cards per server. A conversion kit is available from Dell to allow moving a blade from an M1000e to a VRTX chassis.

Storage: The VRTX chassis includes shared storage slots that connect to a single or dual PERC 8 controller(s) via switched 6Gbit SAS. This controller, which is managed through the CMC, allows RAID groups to be configured and then subdivided into individual virtual disks that can be presented to either a single blade or multiple blades. The shared storage slots are either 12 x 3.5" HDD slots or 25 x 2.5" HDD slots, depending on the VRTX chassis purchased. Dell offers 12Gbit SAS disks for the VRTX, but these operate at the slower 6Gbit rate for compatibility with the older PERC8 and SAS switches.
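
The relation between RAID groups, virtual disks and blade slots can be illustrated with a small model. The sketch below is illustrative Python only, not Dell's management interface; it assumes a four-slot chassis and shows a RAID group being carved into a datastore shared by all four blades plus a private volume presented to a single blade.

    # Illustrative model (not Dell's API) of how a shared PERC 8 RAID group
    # can be carved into virtual disks and presented to one or more blade slots.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualDisk:
        name: str
        size_gb: int
        assigned_slots: set[int] = field(default_factory=set)  # blade slots 1-4

    @dataclass
    class RaidGroup:
        level: str                 # e.g. "RAID6"
        capacity_gb: int           # usable capacity after parity
        virtual_disks: list[VirtualDisk] = field(default_factory=list)

        def carve(self, name: str, size_gb: int, slots: set[int]) -> VirtualDisk:
            used = sum(vd.size_gb for vd in self.virtual_disks)
            if used + size_gb > self.capacity_gb:
                raise ValueError("not enough free space in the RAID group")
            if not slots <= {1, 2, 3, 4}:
                raise ValueError("the VRTX chassis has only four blade slots")
            vd = VirtualDisk(name, size_gb, slots)
            self.virtual_disks.append(vd)
            return vd

    # One RAID group shared by all four blades (e.g. a cluster datastore)
    # plus a private boot volume that only blade 1 can see.
    group = RaidGroup(level="RAID6", capacity_gb=10_000)
    group.carve("shared-datastore", 8_000, {1, 2, 3, 4})
    group.carve("blade1-boot", 200, {1})

A virtual disk presented to multiple blades typically requires a cluster-aware file system on top of it (such as VMware VMFS or Microsoft Cluster Shared Volumes), since the blades otherwise cannot coordinate concurrent writes.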

Networking: The VRTX chassis has a built-in I/O module (IOM) carrying Ethernet traffic to the server blades. At present the options for this IOM are an 8-port 1Gb pass-through module or a 24-port 1Gb switch. The pass-through module offers two connections to each internal blade slot, while the 24-port switch provides 16 internal ports (four per blade slot) and 8 external ports used to uplink to the network. The I/O modules used in the VRTX are a different size from the I/O modules of the M1000e, so I/O modules are not interchangeable between the systems. A 10Gb I/O module is planned for a future release.

Management: A Chassis Management Controller (CMC) is responsible for the management of the entire system. The CMC is similar to the CMC used in the M1000e chassis. Connection to the CMCs is made via separate RJ45 Ethernet connectors.
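
Besides its web interface, Dell chassis management controllers can generally also be scripted with the racadm command-line utility. The following is a minimal sketch that wraps a remote racadm call from Python; the CMC address and credentials are placeholders, and it assumes racadm is installed on the management workstation.

    # Minimal sketch: querying the VRTX CMC remotely with Dell's racadm utility.
    # The IP address and credentials below are placeholders for your environment.
    import subprocess

    CMC_IP = "192.168.0.120"   # placeholder: address of the CMC network port
    USER = "root"              # placeholder credentials
    PASSWORD = "calvin"

    def cmc_getsysinfo() -> str:
        """Run 'racadm getsysinfo' against the remote CMC and return its output."""
        result = subprocess.run(
            ["racadm", "-r", CMC_IP, "-u", USER, "-p", PASSWORD, "getsysinfo"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(cmc_getsysinfo())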

Power and cooling: The system comes with four PSUs at 110 or 230V AC; there is no option to use -48V DC PSUs. Each 1100-watt PSU has a built-in fan. The server modules are cooled by four blower modules, each containing two fans, and the rest of the chassis is cooled by six internal fans which can only be reached by opening the chassis. The fans used are the same units as used in the PowerEdge R720xd rack server.

KVM: Unlike the M1000e, the VRTX does not have a separate KVM module; the KVM functionality is built into the main chassis. The system only supports a USB keyboard and mouse. The KVM function is controlled via the mini LCD screen. The USB ports, as well as the 15-pin VGA connector, are at the front of the system.

USB: The USB connectors are only for connecting a keyboard and mouse; the system does not support external storage via USB.

LCD: The mini-LCD screen at the front of the system shows status information, allows some basic settings to be configured (such as the CMC IP address) and manages the built-in KVM switch. The LCD screen functions are controlled via a five-button navigation system, similar to the system used on the M1000e.

Serial: A single RS-232 serial communication port is provided at the back of the system. This connector is only used for local configuration of the CMC; it cannot be used as a serial port for a server in the system.

Expansion slots: The system provides space for five low-profile PCIe Gen2 (2.0) expansion cards and three full-height PCIe Gen2 (2.0) expansion cards. Via the management controller each slot can be assigned to a specific server. A PCIe slot can only be assigned to a server when that server is powered off, as the PCIe card is recognized and initialized by the server BIOS at startup. Only certain PCIe cards are supported by Dell. Currently, VRTX support is limited to eight different PCIe cards, including six Ethernet NICs (Intel and Broadcom), a 6Gbit SAS adapter (LSI) and the AMD FirePro W7000.[3]
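
The power-off requirement can be summarised in a short sketch. The code below is plain illustrative Python, not the CMC's actual interface; it simply models the rule that a slot may only be reassigned while the target server node is powered off.

    # Illustrative sketch (not Dell's API): a PCIe slot may only be assigned
    # to a blade while that blade is powered off, because the card is
    # enumerated by the server's BIOS at startup.
    from __future__ import annotations

    class AssignmentError(Exception):
        pass

    def assign_pcie_slot(slot_owner: dict[int, int | None],
                         powered_on: set[int],
                         slot: int, server: int) -> None:
        """Assign PCIe slot `slot` to blade `server` (1-4) in this in-memory model."""
        if server in powered_on:
            raise AssignmentError(
                f"server {server} must be powered off before slot {slot} is assigned")
        slot_owner[slot] = server

    # Eight slots, blade 2 currently running.
    owners: dict[int, int | None] = {n: None for n in range(1, 9)}
    running = {2}
    assign_pcie_slot(owners, running, slot=1, server=1)   # succeeds
    try:
        assign_pcie_slot(owners, running, slot=2, server=2)
    except AssignmentError as err:
        print(err)                                        # blade 2 is still powered on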

Usage

The VRTX is targeted at two different user groups. The first is local offices of large enterprises where the majority of IT services are provided centrally from remote datacenters; here the VRTX system provides local functions such as a relatively small virtualisation platform for locally needed services such as VDI workstations, a local Exchange or Lync server, and local storage facilities. The entire system will normally be managed by the central IT department via the CMC (for the chassis) and a management tool such as SCCM or KACE (for the servers running on the system).[4]

The other intended audience is the SME market with limited IT requirements. The tower model is designed to run in a normal office environment. Dell claims that the noise level of the VRTX system is very low[1] and that it can be installed in a normal office environment: there is no need to install the system in a special server room. It is, however, possible to convert a tower VRTX into a rack-mounted VRTX.

Operating systems

The server blades in a VRTX system (M520/M620/M820) have a different list of supported operating systems than their M1000e counterparts. The operating systems supported to run on the blades are Windows Server 2008 SP2, Windows Server 2008 R2 SP1, Windows Server 2012, Windows Server 2012 R2, VMware ESXi 5.1 and VMware ESXi 5.5. Other operating systems are supported only as virtual machines on Hyper-V or ESXi. The main intended and marketed use is as a system running Hyper-V or ESXi.[5]

The hypervisors supported on the VRTX chassis are Windows Hyper-V, VMware ESXi 5.1 and VMware ESXi 5.5. At this time other hypervisors, such as Citrix XenServer, are not supported.

At launch no Linux-based operating systems were supported, mainly because there was no Linux driver available for the MegaRAID controller for shared storage access.[6] As of June 2014, Linux support for the VRTX Shared PERC 8 has been released. This driver supports the single-controller Entry Shared Mode (ESM) configuration; support for the dual-controller High Availability Shared PERC configuration has not been announced.[7]

Announced features

Although they were not available at the launch in June 2013, Dell has hinted at future improvements and functionality.

These include:

- (Available as of April 2014) Support for dual PERCs (PowerEdge RAID Controllers) for redundant access to the internal shared storage slots
- Support for 10Gb Ethernet switch and pass-through modules
- Support for additional operating systems, mainly Linux-based

See also

Dell PowerEdge - main article on Dell PowerEdge server family
PowerEdge Generation 12 servers
PowerEdge M520 on the M1000e page
PowerEdge M620 on the M1000e page
CMC - on the M1000e page

References

  1. Chris Preimesberger (7 June 2013). "Why Dell may have hit home with new VRTX server". eWeek. Retrieved 30 September 2013.
  2. "Dell PowerEdge VRTX" (PDF). Specification Sheet. Dell. 8 July 2013. Retrieved 30 September 2013.
  3. "PowerEdge VRTX Technical Guide" (PDF). Dell.com. Dell. Retrieved 5 January 2016.
  4. Chris Cowley (4 June 2013). "Dell announces VRTX". Blog post. Retrieved 30 June 2013.
  5. Dell reference architectures with three typical VRTX setups: Exchange 2013 on Hyper-V, Hyper-V 2012 server, and VRTX as a VMware ESXi cluster. Retrieved 4 August 2013.
  6. "Dell debuts VRTX for converged infrastructure", section 'Windows Only'. ServerWatch.com. 5 June 2013. Retrieved 30 June 2013.
  7. "VRTX Shared PERC 8 driver for Linux". Dell.com. 17 June 2014. Retrieved 15 January 2015.