Enabling Jumbo frames in VMware environments

An essential feature that we should always configure when working with gigabit networks is the MTU (Maximum Transmission Unit), which is the size in bytes of the largest data unit that can be sent over IP; by default, LAN networks use an MTU of 1500 bytes. On VMware and on every device that makes up the gigabit Ethernet network (typically the iSCSI storage network), its value should be raised to 9000 bytes. We should enable it on the storage array, on the switch (some switches have it enabled by default), on the VMware ESX / VMware ESXi hosts (vSwitch and port group), and at the NIC level on equipment that is directly connected. All this in order to take full advantage of the gigabit network and be able to send larger packets.
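As a sketch, on a classic ESX/ESXi host the MTU can be raised from the service console; the vSwitch name, VMkernel port group name, and IP addresses below are placeholders for your environment.

```shell
# Set MTU 9000 on the vSwitch that carries iSCSI traffic (vSwitch1 is an example name)
esxcfg-vswitch -m 9000 vSwitch1

# A VMkernel NIC cannot change its MTU in place on classic ESX:
# delete the iSCSI vmknic and recreate it with a 9000-byte MTU
esxcfg-vmknic -d "iSCSI"
esxcfg-vmknic -a -i 192.168.100.10 -n 255.255.255.0 -m 9000 "iSCSI"

# Verify jumbo frames end to end with a non-fragmenting ping to the array:
# 8972 bytes of payload = 9000 minus 20 (IP header) minus 8 (ICMP header)
vmkping -d -s 8972 192.168.100.50
```

If the `vmkping` with `-d` (don't fragment) succeeds, every hop between the host and the array is passing jumbo frames; if any device in the path is still at 1500 bytes, the ping will fail.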

Setting up an HP Lefthand array

In this document you will see generic configurations for HP's SAN arrays, known as HP LeftHand. In this case we use virtual arrays under a VMware environment, since they allow you to work perfectly well in a much more flexible laboratory setting. HP has several models of physical LeftHand arrays, all running the same system but with different capacities, disk models, and Ethernet ports; these are the HP LeftHand P4500 and HP LeftHand P4300 series. There is also, for production environments, the HP LeftHand P4000 Virtual SAN Appliance, or VSA. In this document we will look at the main features of these arrays: storage clustering (greater performance and capacity), Network RAID (increased data availability), thin provisioning (reduces costs and improves disk capacity utilization), iSCSI (Ethernet network technology), snapshots, and replication using Remote Copy (for local replication […]

Creating an iSCSI Target on Windows Unified Data Storage Server

In this document, we'll see how we can create an iSCSI target with our Windows operating system, with no third-party applications. We can create or assign virtual disks to the iSCSI target and later use them as shared storage, for example to set up a cluster. What we will need is a compatible operating system, such as Windows Unified Data Storage Server. It is an OEM operating system, that is, one that comes pre-installed on the equipment when we purchase it from our manufacturer. If we have Windows Server 2003 or Windows Server 2008, we would need to upgrade it to one of these editions; I will cover in another document how to run OEM operating systems on virtual machines (for example).
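Once the target and its virtual disks exist (created through the management console), connecting from another Windows server can be done with the built-in `iscsicli` command-line tool. The portal IP and the IQN below are placeholders; the real IQN is shown by the target server.

```shell
REM Register the iSCSI target server's portal (its IP on the storage network)
iscsicli QAddTargetPortal 192.168.1.20

REM List the targets that the portal exposes to this initiator
iscsicli ListTargets

REM Log in to one of the listed targets (the IQN shown here is an example)
iscsicli QLoginTarget iqn.1991-05.com.microsoft:server-target1
```

After the login, the virtual disk appears in Disk Management as a new local disk, ready to be brought online and formatted.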

Connecting an iSCSI-enabled Openfiler NAS to VMware ESX

In this document, we'll look at how to connect an ESX server to a shared storage system; for this we will use an Openfiler NAS, over iSCSI. For this document, it is assumed that we already have an Openfiler server installed, with an iSCSI-type volume created, so that we can use HA or DRS with VMotion. And, of course, a virtual network connection that allows VMotion.
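On the ESX side, the software iSCSI initiator can be enabled and pointed at the Openfiler portal from the service console; the portal IP and the vmhba name below are examples (the software iSCSI adapter's name depends on the ESX version).

```shell
# Enable the ESX software iSCSI initiator and confirm it is on
esxcfg-swiscsi -e
esxcfg-swiscsi -q

# Add the Openfiler server as a SendTargets discovery address
# (vmhba40 is an example; check which vmhba is the software iSCSI adapter)
vmkiscsi-tool -D -a 192.168.0.10 vmhba40

# Rescan the software iSCSI bus so the new LUNs appear
esxcfg-swiscsi -s
```

After the rescan, the Openfiler LUN shows up under Storage Adapters and can be formatted as a VMFS datastore shared by all the hosts.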

Extending Storage on VMware ESX – Extending a LUN and having VMware ESX extend its VMFS partition

This document shows how to extend a VMware ESX partition. If for whatever reason we want to expand a LUN, and this LUN is the shared storage of our VMware ESX servers, we must extend the VMFS partition by following these steps. In general it is not advisable to do this; it is always preferable to create a new LUN with the available free space. But if it is necessary, here is how to do it. Even so, for the extension to complete correctly, it is advisable to stop the virtual machines running on this LUN (from experience).
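The exact procedure depends on the ESX version: on older releases the free space is joined via the vSphere/VI Client as a VMFS extent, while newer versions can grow the volume in place with `vmkfstools`. As a sketch, with a placeholder device identifier (the real `naa.` name comes from the Storage Adapters view):

```shell
# Rescan the HBA so ESX sees the LUN's new size (vmhba1 is an example)
esxcfg-rescan vmhba1

# ESX(i) 4.x and later: grow the VMFS volume into the free space
# on the same partition; --growfs takes the partition path twice
vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1 \
           /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1
```

As the document notes, powering off the virtual machines on the datastore before growing it is the safer path.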

Using StarWind to emulate iSCSI/NAS/SAN arrays

With this procedure we are going to explain how a NAS/SAN or iSCSI device works; everything is done simply, with software, and we will use it to perform cluster procedures. StarWind is software capable of emulating a disk array, whether a NAS, a SAN, or, the cheapest option, an iSCSI device. In this procedure we will create a virtual array of iSCSI disks with StarWind (this will be the iSCSI target), and with the iSCSI initiator we will connect to it from the servers that should have the disks attached, in order to later create a cluster. The following diagram illustrates the setup:
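Once each server has logged in to the StarWind target with the iSCSI initiator, the new disk has to be brought online and formatted before the cluster can use it. A sketch of an interactive `diskpart` session follows; the disk number and drive letter are examples for your environment.

```shell
REM Run diskpart on the server where the iSCSI disk has just appeared
diskpart

DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=ClusterDisk
DISKPART> assign letter=Q
```

Only one node should format the disk; the other cluster nodes simply log in to the same target and see the already-formatted volume.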

Configuring iSCSI on Microsoft Windows

iSCSI is, roughly speaking :P, having a disk server (SCSI, SATA, or IDE disks with whatever RAID types we want to set up). That disk server connects to the network and tricks our other Windows servers into accessing those disks as if they were physically attached. Here's a diagram of the example you'll see in the document: