Configuring and Installing Solaris 8 and Solaris 9 Containers (Zones)

Installation

The following procedure is for Solaris 8 containers; follow the same steps to install a Solaris 9
container (using the corresponding Solaris 9 brand packages).

Step 1
  • Install Solaris 10 5/08

       # cat /etc/release
                           Solaris 10 5/08 s10s_u5wos_10 SPARC
               Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                 Assembled 24 March 2008
      
  • Install the Solaris 8 container packages (run from the directory containing them)

      # pkgadd -d . SUNWs8brandr
      # pkgadd -d . SUNWs8brandu
      # pkgadd -d . SUNWs8p2v
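
      To confirm the packages registered (optional check; standard pkginfo usage):

      # pkginfo SUNWs8brandr SUNWs8brandu SUNWs8p2v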
      
Step 2
  • Create a zone for Solaris 8

In this example the zone name is "zone-s8" and "130.15.241.89" is used
as the IP address.

 # mkdir /export/home/zone-s8
 # chmod go-rx /export/home/zone-s8
 # zonecfg -z zone-s8
	zone-s8: No such zone configured
	Use 'create' to begin configuring a new zone.
	zonecfg:zone-s8> create -t SUNWsolaris8
	zonecfg:zone-s8> set zonepath=/export/home/zone-s8
	zonecfg:zone-s8> set autoboot=true
	zonecfg:zone-s8> add net
	zonecfg:zone-s8:net> set address=130.15.241.89
	zonecfg:zone-s8:net> set physical=ipge0
	zonecfg:zone-s8:net> end
	zonecfg:zone-s8> verify
	zonecfg:zone-s8> commit
	zonecfg:zone-s8> exit
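
Before installing, the configuration can be reviewed from the global zone:

 # zonecfg -z zone-s8 info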
  • Install the S8 image in this zone

NOTE: This step uses the sample Solaris 8 image (solaris8-image.flar) included in the
Solaris 8 Container software. If you want to migrate your existing S8 system into the
container, you must first create an archive of that S8 system; see the example below.
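
A minimal sketch of creating such an archive, run on the source Solaris 8 system (the
archive name "s8-system" and the output path are placeholders):

 # flarcreate -S -n s8-system /export/s8-system.flar

Here -n sets the archive name and -S skips the free-space check; transfer the resulting
.flar file to the Solaris 10 host before running the install step below.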

 # zoneadm -z zone-s8 install -u -a solaris8-image.flar
        Log File: /var/tmp/zone-s8.install.1888.log
          Source: /workspace/vs145730/etude/s8containers_pkg/solaris8-image.flar
      Installing: This may take several minutes...
  Postprocessing: This may take several minutes...
        Log File: /export/home/zone-s8/root/var/log/zone-s8.install.1888.log
 #
      
  • Boot the zone

     # zoneadm -z zone-s8 boot
    
  • Check that the zone is running

     # zoneadm list -cv
      ID NAME          STATUS     PATH                       BRAND    IP
       0 global        running    /                          native   shared
      15 zone-s8       running    /export/home/zone-s8       solaris8 shared
    
    

In another window, log in to the zone console for the initial zone configuration:

 # zlogin -C zone-s8

Follow the interactive menu to configure the zone. After configuration is complete, the
zone will reboot automatically. (Disconnect from the console with ~. when finished.)

  • Log in to the zone

     # zlogin zone-s8
     [Connected to zone 'zone-s8' pts/7]
     Last login: Wed Jun  4 13:02:31 on pts/1
     Sun Microsystems Inc.   SunOS 5.8       Generic Patch   February 2004
     #
     # uname -a
     SunOS zone-s8 5.8 Generic_Virtual sun4v sparc SUNW,Sun-Fire-T200
     #
     # cat /etc/release
                           Solaris 8 2/04 s28s_hw4wos_05a SPARC
               Copyright 2004 Sun Microsystems, Inc.  All Rights Reserved.
                                Assembled 08 January 2004
     #
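
A few other commands that are handy for managing the zone from the global zone
(standard Solaris 10 usage):

 # zoneadm -z zone-s8 reboot                       (reboot the zone)
 # zoneadm -z zone-s8 halt                         (hard-stop the zone)
 # zlogin zone-s8 /usr/sbin/shutdown -i0 -g0 -y    (clean shutdown from the global zone)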
    
    

Sun T3-1 / T3-2 / T3-4 Server RAID configuration

1) First, find the right controller. Tip: use the built-in 'devalias' command to list device aliases and locate the SCSI controllers:

{0} ok devalias
  disk7                    /pci@400/pci@2/pci@0/pci@4/scsi@0/disk@p3
  disk6                    /pci@400/pci@2/pci@0/pci@4/scsi@0/disk@p2
  disk5                    /pci@400/pci@2/pci@0/pci@4/scsi@0/disk@p1
  disk4                    /pci@400/pci@2/pci@0/pci@4/scsi@0/disk@p0
  cdrom                    /pci@400/pci@2/pci@0/pci@4/scsi@0/disk@p6
  scsi1                    /pci@400/pci@2/pci@0/pci@4/scsi@0
  disk3                    /pci@400/pci@1/pci@0/pci@4/scsi@0/disk@p3
  disk2                    /pci@400/pci@1/pci@0/pci@4/scsi@0/disk@p2
  disk1                    /pci@400/pci@1/pci@0/pci@4/scsi@0/disk@p1
  disk0                    /pci@400/pci@1/pci@0/pci@4/scsi@0/disk@p0
  disk                     /pci@400/pci@1/pci@0/pci@4/scsi@0/disk@p0
  scsi0                    /pci@400/pci@1/pci@0/pci@4/scsi@0

2) Note: if the drives are not in the HDD0 and HDD1 slots, you may have to choose the other controller (scsi1). Physically check your drive configuration. Recommendation: ensure the HDDs are placed in slots 0 and 1 before setting up RAID; see the T3-1 HDD drive configuration schematic in the server documentation.

You must first use the 'select' command to choose the controller to work on. The example below uses the shortcut alias 'scsi0' to reference the device path shown in (1) above.

{0} ok select scsi0

3) Once selected, the following command will show the controller's child devices:

{0} ok show-children

  FCode Version 1.00.54, MPT Version 2.00, Firmware Version 5.00.17.00

  Target 9
    Unit 0   Disk   HITACHI  H103030SCSUN300G A2A8    585937500 Blocks, 300 GB
    SASDeviceName 5000cca015215698  SASAddress 5000cca015215699  PhyNum 0
  Target a
    Unit 0   Disk   HITACHI  H103030SCSUN300G A2A8    585937500 Blocks, 300 GB
    SASDeviceName 5000cca015216608  SASAddress 5000cca015216609  PhyNum 1

4) If both disk devices appear as above, it is OK to create the RAID volume. To make a mirrored boot drive, create it as a RAID 1 device. Note: if both disks do not show as in (3) above, go back to (2) and try 'select scsi1'. If there is still trouble, the HDDs may be in incompatible slots.

To create raid 1 volume:

{0} ok 9 a create-raid1-volume
   Target 9 size is 583983104 Blocks, 298 GB
   Target a size is 583983104 Blocks, 298 GB
   The volume can be any size from 1 MB to 285148 MB

   What size do you want?  [285148]
   Volume size will be 583983104 Blocks, 298 GB
   Enter a volume name:  [0 to 15 characters] raidvol0
   Volume has been created

5) Run show-children again; you should see a single RAID volume:

{0} ok show-children

FCode Version 1.00.54, MPT Version 2.00, Firmware Version 5.00.17.00

Target 389 Volume 0
Unit 0   Disk   LSI      Logical Volume   3000    583983104 Blocks, 298 GB
VolumeDeviceName 30c25ab5985d6551  VolumeWWID 00c25ab5985d6551
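
While the controller is still selected, the FCode utility on these controllers should also
offer a 'show-volumes' command as a cross-check (assuming the same LSI firmware as above):

{0} ok show-volumes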

Reset the machine prior to the JumpStart install:
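
If the controller is still selected from step (2), unselect it first (standard OBP command):

{0} ok unselect-dev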

{0} ok reset-all

Tip: send a break to get back to the ok prompt:
  • From a Digi console, send a break via CTRL-p and select 'b'.
  • From the ILOM console, send a break via:  set /HOST send_break_action=break

6) Now it is OK to boot and install the OS as normal:

{0} ok boot net:dhcp - install

Solaris: Zone memory capping

First, you must have created the zone with memory capping enabled. This is done during the zonecfg setup:

zonecfg:<zone>> add capped-memory
zonecfg:<zone>:capped-memory> set physical=50m
zonecfg:<zone>:capped-memory> set swap=100m
zonecfg:<zone>:capped-memory> set locked=30m
zonecfg:<zone>:capped-memory> end

Once your zone is configured, installed, and running, you can view its resource limits:

# /bin/prctl -n zone.max-swap `pgrep -z <zone> init`
process: 999: /sbin/init
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-swap
        privileged      100.0MB      -   deny                                -
        system          16.0EB     max   deny                                -
# /bin/prctl -n zone.max-locked-memory `pgrep -z <zone> init`
process: 999: /sbin/init
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-locked-memory
        privileged      30.0MB      -   deny                                 -
        system          16.0EB    max   deny                                 -
# rcapstat -z 1 1
id zone            nproc    vm   rss   cap    at avgat    pg avgpg
2 <zone>            -      48M   36M   50M    0K    0K    0K    0K

To change the max-swap resource do the following:

# prctl -n zone.max-swap -r -v 200M `pgrep -z <zone> init`

To change the max-locked-memory resource do the following:

# prctl -n zone.max-locked-memory -r -v 100M `pgrep -z <zone> init`

Changing the physical memory cap is a little different; you'll need to use the rcapadm command:

# rcapadm -z <zone> -m 100M
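
Note: the physical cap is enforced by the resource capping daemon (rcapd). If rcapstat
shows no output, enable the daemon first:

# rcapadm -E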

Then view all the resources again; you should see the changes:

# /bin/prctl -n zone.max-swap `pgrep -z <zone> init`
process: 999: /sbin/init
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-swap
       privileged      200.0MB      -   deny                                 -
       system          16.0EB     max   deny                                 -
# /bin/prctl -n zone.max-locked-memory `pgrep -z <zone> init`
process: 999: /sbin/init
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
zone.max-locked-memory
        privileged      100.0MB     -   deny                                 -
        system          16.0EB    max   deny                                 -
# rcapstat -z 1 1
id zone            nproc    vm   rss   cap    at avgat    pg avgpg
2 <zone>            -      48M   36M   100M   0K    0K    0K    0K

That’s it. To make the changes permanent, you’ll need to go into zonecfg and adjust the resources that way.

# zonecfg -z <zone>
zonecfg:<zone>> select capped-memory
zonecfg:<zone>:capped-memory> set physical=100m
zonecfg:<zone>:capped-memory> set swap=200m
zonecfg:<zone>:capped-memory> set locked=100m
zonecfg:<zone>:capped-memory> end
zonecfg:<zone>> commit

This saves the zone configuration file, so the memory limits are applied the next time the zone boots; otherwise the changes are only temporary.

Solaris nfs mount: mount: Not owner

If you have ever tried to mount a Linux NFS share on a Solaris workstation, you may have seen the error below.

solaris1# mount t3-61.abc.com:/exports/homes /mnt
nfs mount: mount: /mnt: Not owner

This may be caused by the Solaris system attempting to mount the exported file system using NFS version 4 instead of version 3. The problem is easily corrected by passing the vers=3 option when mounting the file system.

solaris1# mount -o vers=3 t3-61.abc.com:/exports/homes /mnt

Additionally, you could add the option to the auto_master map if the problem comes from automount operations.

/mount_point	map_name 	-rw,vers=3
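
After editing the map, have the automounter reread it:

# automount -v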

If you want the change to be system wide you could edit the /etc/default/nfs file and set the max version:

NFS_CLIENT_VERSMAX=3
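
To verify which NFS version a mounted file system actually negotiated, check the mount
flags:

# nfsstat -m /mnt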

How to check the number of physical CPUs and cores?

number of physical CPUs: "psrinfo -p"
number of cores: "kstat cpu_info|grep core_id|sort -u|wc -l"
number of hardware threads: "psrinfo|wc -l" (psrinfo -pv shows the per-socket layout)

# echo "`psrinfo -p` socket(s)"
2 socket(s)

# echo "`kstat -m cpu_info|grep -w core_id|uniq|wc -l` core(s)"
8 core(s)

# echo "`psrinfo|wc -l` logical (virtual) processor(s)"
64 logical (virtual) processor(s)

So here each core runs 8 hardware threads (64 threads / 8 cores).

e.g. on a T3-1:

root@t31 # echo "`psrinfo -p` socket(s)"
1 socket(s)
root@t31 # echo "`kstat -m cpu_info|grep -w core_id|uniq|wc -l` core(s)"
16 core(s)
root@t31 # echo "`psrinfo|wc -l` logical (virtual) processor(s)"
128 logical (virtual) processor(s)
root@t31 #
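
Putting it together, a small sketch that prints the whole topology at once (same commands
as above; the awk just strips wc's padding):

#!/bin/sh
# Print Solaris CPU topology: sockets, cores, and hardware threads.
SOCKETS=`psrinfo -p`
CORES=`kstat -m cpu_info | grep -w core_id | sort -u | wc -l | awk '{print $1}'`
THREADS=`psrinfo | wc -l | awk '{print $1}'`
echo "$SOCKETS socket(s), $CORES core(s), $THREADS hardware thread(s)"
echo "threads per core: `expr $THREADS / $CORES`"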