
Wednesday, 11 May 2011

VIO Server (Virtual I/O Server)

VIO Server Installation & Configuration

The IBM Virtual I/O Server is part of the IBM eServer p5 Advanced POWER Virtualization hardware feature. The Virtual I/O Server allows sharing of physical resources between LPARs, including virtual SCSI and virtual networking. This enables more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.

Installation

You have two options to install the AIX-based VIO Server:
1. Install from CD
2. Install from network via an AIX NIM-Server

Installation method #1 is probably the more frequently used method in a pure Linux environment, as installation method #2 requires the presence of an AIX NIM (Network Installation Management) server. The two methods differ only in the initial boot step and are identical from then on. Both lead to the following installation screen:

IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM
...
IBM IBM IBM IBM IBM IBM      STARTING SOFTWARE       IBM IBM IBM IBM IBM IBM
IBM IBM IBM IBM IBM IBM        PLEASE WAIT...        IBM IBM IBM IBM IBM IBM
...
IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM IBM

Elapsed time since release of system processors: 51910 mins 20 secs

-------------------------------------------------------------------------------
Welcome to the Virtual I/O Server.
boot image timestamp: 10:22 03/23
The current time and date: 17:23:47 08/10/2005
number of processors: 1    size of memory: 2048MB
boot device: /pci@800000020000002/pci@2,3/ide@1/disk@0:\ppc\chrp\bootfile.exe
SPLPAR info: entitled_capacity: 50  platcpus_active: 2
This system is SMT enabled: smt_status: 00000007; smt_threads: 2
kernel size: 10481246; 32 bit kernel
-------------------------------------------------------------------------------




The next step is to define the system console. After some time you should see the following screen:


******* Please define the System Console. *******

Type a 1 and press Enter to use this terminal as the system console.


Then choose the language of the installation:


>>> 1 Type 1 and press Enter to have English during install.


This is the main installation menu of the AIX-based VIO-Server:



Welcome to Base Operating System
Installation and Maintenance

Type the number of your choice and press Enter. Choice is indicated by >>>.

>>> 1 Start Install Now with Default Settings

    2 Change/Show Installation Settings and Install

    3 Start Maintenance Mode for System Recovery

    88 Help ?
    99 Previous Menu

>>> Choice [1]:


Select the hard disk on which to install the VIO base operating system, just as in an AIX base operating system installation.


Once the installation is over, you will get a login prompt similar to that of an AIX server.

The VIO server is essentially AIX with virtualization software loaded on top of it. Generally we do not host any applications on a VIO server; it is basically used for sharing I/O resources (disk and network) with the client LPARs hosted in the same physical server.


Initial setup
After the reboot you are presented with the VIO Server login prompt. You cannot log in as user root; you have to use the special user ID padmin. No initial default password is set, and immediately after login you are forced to set a new password.


Before you can do anything you have to accept the I/O Server license.
This is done with the license command:

$ license -accept

Once you are logged in as user padmin you find yourself in a restricted Korn shell with only a limited set of commands. You can see all available commands with the command help. All these commands are shell aliases to a single SUID binary called ioscli, which is located in the directory /usr/ios/cli/bin. If you are familiar with AIX you will recognize most commands, but most command line parameters differ from the AIX versions.
As there are no man pages available, you can see all options for each command separately by issuing help <command>. Here is an example for the command lsmap:

$ help lsmap
Usage: lsmap {-vadapter ServerVirtualAdapter | -plc PhysicalLocationCode |
              -all} [-net] [-fmt delimiter]

Displays the mapping between physical and virtual devices.

    -all        Displays mapping for all the server virtual adapter
                devices.
    -vadapter   Specifies the server virtual adapter device
                by device name.
    -plc        Specifies the server virtual adapter device
                by physical location code.
    -net        Specifies supplied device is a virtual server
                Ethernet adapter.
    -fmt        Divides output by a user-specified delimiter.



A very important command is oem_setup_env, which gives you access to the regular AIX command line interface. This is provided solely for the installation of OEM device drivers.
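A minimal sketch of how this looks in practice (the oslevel output shown is just an illustration):

$ oem_setup_env
# oslevel                                   (any regular AIX command now works in this root shell)
5.3.0.0
# exit                                      (return to the restricted ioscli shell)
$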


Virtual SCSI setup

To map a logical volume (LV):
# mkvg: creates the volume group, where a new LV will be created using the mklv command
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with the LV
# mkvdev: maps the virtual SCSI server adapter to the LV
# lsmap -all: shows the mapping information

To map a physical disk (a short sketch follows this list):
# lsdev: shows the virtual SCSI server adapters that could be used for mapping with a physical disk
# mkvdev: maps the virtual SCSI server adapter to a physical disk
# lsmap -all: shows the mapping information
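
A minimal sketch of the physical disk case (the device names hdisk2, vhost0 and vtscsi0 are assumptions for illustration):

$ lsdev -type adapter                                   (find a free virtual SCSI server adapter, e.g. vhost0)
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtscsi0     (map the whole disk hdisk2 to vhost0)
$ lsmap -all                                            (confirm the new mapping)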

Client partition commands 

No commands are needed; the Linux kernel is notified immediately.

Create new volume group datavg with member disk hdisk1:
$ mkvg -vg datavg hdisk1

Create new logical volume vdisk0 in volume group datavg:
$ mklv -lv vdisk0 datavg 10G

Map the virtual SCSI server adapter to the logical volume:
$ mkvdev -vdev vdisk0 -vadapter vhost0

Display the mapping information:
$ lsmap -all
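
For orientation, the output of lsmap -all then looks roughly like the following (a sketch only; the location code, partition ID and LUN number are made up, and the virtual target device gets a default name such as vtscsi0):

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0          U9111.520.10DDEEC-V1-C20                     0x00000002

VTD                   vtscsi0
LUN                   0x8100000000000000
Backing device        vdisk0
Physloc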

Virtual Ethernet setup

To list all virtual and physical adapters use the lsdev -type adapter command.

$ lsdev -type adapter

name status description
ent0 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent1 Available 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ide0 Available ATA/IDE Controller Device
sisscsia0 Available PCI-X Dual Channel Ultra320 SCSI Adapter
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

Choose the virtual Ethernet adapter you want to map to the physical Ethernet adapter. The virtual devices can be listed with lsdev -virtual:

$ lsdev -virtual
name status description
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
vhost0 Available Virtual SCSI Server Adapter
vhost1 Available Virtual SCSI Server Adapter
vhost2 Available Virtual SCSI Server Adapter
vhost3 Available Virtual SCSI Server Adapter
vsa0 Available LPAR Virtual Serial Adapter

The command mkvdev maps a physical adapter to a virtual adapter, creates a layer-2 network bridge, and defines the default virtual adapter with its default VLAN ID. It creates a new shared Ethernet adapter, e.g., ent3.
Make sure the physical and virtual interfaces are unconfigured (down or detached).

Scenario A (one VIO server)
Create a shared Ethernet adapter ent3 from a physical adapter (ent0) and a virtual adapter (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
ent3 Available
en3
et3
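
As a quick verification (a sketch; the output is trimmed to the one relevant line), the new adapter should now show up in the device list:

$ lsdev -type adapter
...
ent3 Available Shared Ethernet Adapter
...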

This has created a new shared Ethernet adapter, ent3, as the check above illustrates. Now configure the TCP/IP settings for this new shared Ethernet adapter. Please note that you have to specify the interface (en3) and not the adapter (ent3).

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Scenario B (two VIO servers)

Create a shared Ethernet adapter ent3 from a physical adapter (ent0) and a virtual adapter (ent2) with PVID 1:

$ mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1


Configure the TCP/IP settings for the new shared Ethernet adapter (ent3):

$ mktcpip -hostname op710-1-vio -inetaddr 9.156.175.231 -interface en3 -netmask 255.255.255.0 -gateway 9.156.175.1 -nsrvaddr 9.64.163.21 -nsrvdomain ibm.com

Client partition commands
No new commands are needed; just the typical TCP/IP configuration is done on the virtual Ethernet interface that is defined in the client partition profile on the HMC, for example as sketched below.
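
On a Linux client partition the virtual Ethernet device appears as an ordinary NIC. A minimal sketch, with the interface name eth0 and the host address assumed for illustration (subnet and gateway taken from the examples above):

# ip addr add 9.156.175.232/24 dev eth0     (address in the SEA's subnet; .232 is made up)
# ip link set eth0 up
# ip route add default via 9.156.175.1      (gateway from the mktcpip examples above)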

VIO Commands


VIO Server Commands

lsdev -virtual (list all virtual devices on VIO server partitions)
lsmap -all (lists mappings between physical and logical devices)
oem_setup_env (change to the OEM [AIX] environment on the VIO server)
Create Shared Ethernet Adapter (SEA) on VIO Server

mkvdev -sea {physical adapt} -vadapter {virtual eth adapt} -default {dflt virtual adapt} -defaultid {dflt vlan ID}
SEA Failover
ent0 - GigE adapter
ent1 - Virt Eth VLAN 1 (defined with a priority in the partition profile)
ent2 - Virt Eth VLAN 99 (control channel)
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2
(Creates ent3 as the Shared Ethernet Adapter)
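
For failover to actually work, the counterpart command is run on the second VIO server against its own adapters; which server is primary is decided by the trunk priority set on the virtual Ethernet adapter in each partition profile, not by the command itself. A sketch, assuming the second server uses the same adapter numbering:

$ mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1 -attr ha_mode=auto ctl_chan=ent2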
Create Virtual Storage Device Mapping

mkvdev -vdev {LV or hdisk} -vadapter {vhost adapt} -dev {virt dev name}
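
For instance (the names datalv, vhost1 and vtscsi1 are purely illustrative):

$ mkvdev -vdev datalv -vadapter vhost1 -dev vtscsi1
vtscsi1 Available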
Sharing a Single SAN LUN from Two VIO Servers to a Single VIO Client LPAR
hdisk3 = SAN LUN (on vioa server)
hdisk4 = SAN LUN (on viob, same LUN as vioa)
chdev -dev hdisk3 -attr reserve_policy=no_reserve (from vioa, to prevent a reserve on the disk)
chdev -dev hdisk4 -attr reserve_policy=no_reserve (from viob, to prevent a reserve on the disk)
mkvdev -vdev hdisk3 -vadapter vhost0 -dev hdisk3_v (from vioa)
mkvdev -vdev hdisk4 -vadapter vhost0 -dev hdisk4_v (from viob)
The VIO client then sees a single LUN with two paths:
lspath -l hdiskx (where hdiskx is the newly discovered disk)
This will show two paths, one down vscsi0 and the other down vscsi1.
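
An illustration of the expected output on the client (hdisk1 assumed as the newly discovered disk):

# lspath -l hdisk1
Enabled hdisk1 vscsi0
Enabled hdisk1 vscsi1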
VIO command from HMC
viosvrcmd -m <managed system> -p <VIOS partition name> -c "lsmap -all"

(this works only with IBM VIO Server)

See man viosvrcmd for more information.

HA VIO server setup

VIO server detail

Product detail
Micro-Partitioning is a mainframe-inspired technology that is based on two major advances in server virtualization: physical processors and I/O devices have been virtualized, enabling these resources to be shared by multiple partitions.
APV/VIO Server:
- Can help lower the cost of existing infrastructure
- Can increase business flexibility, allowing you to meet anticipated and unanticipated needs
- Can reduce the complexity of growing your infrastructure
Logical partitioning is a server design feature that provides more end-user flexibility by making it possible to run multiple, independent operating system images concurrently on a single server. The introduction of logical partitioning (LPAR) technology to IBM System p systems greatly expanded the options for deploying applications and workloads onto server hardware.
Mixed operating system environment