Mabuhay! [Welcome!]

Hello world! This is it. I've always wanted to blog. I don't want fame, just to make myself heard. No! Just to express myself. So I don't really care whether anyone believes what I write here, or whether anyone even gets interested in reading it. My posts may be novel-length or one-liners; it doesn't matter. Still, I'm willing to listen to your views, as long as they do justice to mine... Well, enjoy your stay, and I hope you'll learn something new, because I just did and I'm sharing it with you. Welcome!

Monday, July 21, 2008

LUN: POWERFAILED

I did encounter this error eons ago but never really had the chance to write about it.

Anyway, I'll be using some inputs provided by my colleagues. Please take note that this is merely about converting the device number and identifying the disk that had the power failure.

From the /var/adm/syslog.log:
...
Jul 20 21:54:50 server4385 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x0000000048753840), from raw device 0x1f060100 (with priority: 0, and current flags: 0x40) to raw device 0x1f078100 (with priority: 1, and current flags: 0x0).
Jul 20 21:54:50 server4385 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x0000000048753840), from raw device 0x1f060100 (with priority: 0, and current flags: 0x40) to raw device 0x1f078100 (with priority: 1, and current flags: 0x0).
Jul 20 21:54:50 server4385 vmunix: LVM: Restored PV 1 to VG 1.
Jul 20 21:54:50 server4385 vmunix: LVM: Restored PV 1 to VG 1.
Jul 20 21:54:54 server4385 vmunix: LVM: vg[1]: pvnum=1 (dev_t=0x1f078100) is POWERFAILED
Jul 20 21:54:54 server4385 vmunix: LVM: vg[1]: pvnum=1 (dev_t=0x1f078100) is POWERFAILED
Jul 20 21:55:04 server4385 vmunix: LVM: Recovered Path (device 0x1f060100) to PV 1 in VG 1.
Jul 20 21:55:04 server4385 vmunix: LVM: Recovered Path (device 0x1f060100) to PV 1 in VG 1.
Jul 20 21:55:04 server4385 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x0000000048753840), from raw device 0x1f078100 (with priority: 1, and current flags: 0x0) to raw device 0x1f060100 (with priority: 0, and current flags: 0x80).
Jul 20 21:55:04 server4385 vmunix: LVM: Performed a switch for Lun ID = 0 (pv = 0x0000000048753840), from raw device 0x1f078100 (with priority: 1, and current flags: 0x0) to raw device 0x1f060100 (with priority: 0, and current flags: 0x80).
...


We'll use the entry "dev_t=0x1f078100". To map this to the exact disk, take the last six (6) hex digits (the device's minor number), e.g., 078100, and check them against the device files in /dev/dsk:

# ll /dev/dsk | grep 078100
brw-r----- 1 bin sys 31 0x078100 Apr 19 23:47 c7t8d1

Now, this gives us the device file for the disk. Since it is being used by LVM, we can use pvdisplay, vgdisplay, or lvdisplay to check, at least in part, on the status of the PV and the data written on it.
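
For example, a quick sanity check on that PV could go something like this [the device file is taken from the ll output above; vg01 is only a placeholder, since the log just calls it VG 1]:

# ioscan -funC disk | more    # confirm the disk is CLAIMED at the hardware level
# pvdisplay /dev/dsk/c7t8d1   # PV Status should go back to "available" once the path recovers
# vgdisplay -v vg01           # placeholder VG name; review the per-PV and per-LV status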

VxVM - Solaris

Well, well, well. What have we got here?! An FS extension for a Solaris box. Hmmm, okay! Let me check some old notes.

Nothing fancy, really. It's just like any ordinary request, but we've got to consider some rules since it involves rootdg, which is a bit sensitive. Every move must be evaluated. Remember: it's the root disk!


[root@server254:/etc/vx/bin]
# vxdg list
NAME         STATE           ID
rootdg       enabled         1052259816.1025.server254

[root@server254:/etc/vx/bin]
# vxassist -g rootdg maxsize
Maximum volume size: 8192 (4Mb)

The output above shows the maximum volume size that can still be carved out of the dg's free space; the first figure is in 512-byte sectors and the value in parentheses is the equivalent in MB. (On the box I was working on, this came out to around 32GB for the dg.)

To extend [the m after 150 signifies that the value is in MB; note that a bare size like 150m sets the volume's new total length, while +150m grows it by 150 MB]:

[root@server254:/etc/vx/bin]
# /etc/vx/bin/vxresize -g rootdg volume_name 150m
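
As a hedged follow-up sketch [volume_name is still a placeholder and the mount point is made up], growing the volume by 150 MB and verifying afterwards could look like:

[root@server254:/etc/vx/bin]
# /etc/vx/bin/vxresize -g rootdg volume_name +150m   # the leading + grows the volume by 150 MB instead of setting its size to 150 MB
# vxprint -g rootdg -vt volume_name                  # confirm the new volume length
# df -k /mount/point/of/volume                       # made-up mount point; vxresize grows the FS along with the volume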



Anyway, I found this useful link for UNIX administration: http://www.hyborian.demon.co.uk/notes/

Thursday, July 17, 2008

Ignite error anew

Chapter 19. After Antonia received a beating from the Brigatisti, Beowulf Agate saved her and brought her to a doctor. I was reading this novel when Maki asked if I had encountered the error below:

[root@server21:/opt/ignite/bin]
# /opt/ignite/bin/make_net_recovery -s ignite_server -x inc_entire=vg00
* Creating NFS mount directories for configuration files.
======= 07/16/08 23:12:49 EDT Started /opt/ignite/bin/make_net_recovery. (Wed
Jul 16 23:12:49 EDT 2008)
@(#)Ignite-UX Revision C.7.4.157
@(#)ignite/net_recovery (opt) Revision:
/branches/IUX_RA0712/ignite/src@72068 Last Modified: 2007-11-01 14:16:06 -0600 (Thu, 01 Nov 2007)

* Testing for necessary pax patch.
* Checking Versions of Recovery Tools
* Creating System Configuration.
* /opt/ignite/bin/save_config -f /var/opt/ignite/recovery/client_mnt/0x000E7FED2223/recovery/2008-07-16,23:12/system_cfg vg00
/opt/ignite/bin/save_config[16]: lssf: Execute permission denied.
save_config: Error - unknown disk type for /dev/dsk/c0t6d0s2, not SCSI or HPFL
awk: Cannot find or open file /var/tmp/swapinfo.tmp.
The source line number is 1.
awk: Cannot find or open file /var/tmp/swapinfo.tmp.
The source line number is 1.
save_config: Error - cannot determine primary swap size
ERROR: /opt/ignite/bin/save_config failed
======= 07/16/08 23:13:26 EDT make_net_recovery completed unsuccessfully

[root@server21:/opt/ignite/bin]
#

Well, the first course of action is to check on the execute permission of `lssf` [which, btw, resolved the issue; check with the "SEC team" regarding the permission or file rule on this]. However, just out of curiosity, I also checked on the swap. It showed 3 swap areas, but the /etc/fstab entries listed only 2 (?). So here comes the experiment: I added the missing entry and activated it, then tried to run the Ignite backup again. But to no avail, still getting the same error. Well, see, I learned.
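
If I were to retrace those checks, a rough sketch would be [the lssf path and the chmod mode are assumptions; clear any permission change with the SEC team first]:

# ll /usr/sbin/lssf          # lssf normally lives in /usr/sbin on HP-UX; verify its execute bits
# chmod 555 /usr/sbin/lssf   # assumed fix for "Execute permission denied"; coordinate with the SEC team
# swapinfo -tam              # list the active swap areas in MB, with a total line
# grep -i swap /etc/fstab    # compare against what fstab actually defines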

Monday, July 14, 2008

NFS

I thought it was just another NFS request. Well, technically, it is. What I meant was just another mount, and there it goes. But I was wrong [again]! Just like other aspects of my life. Hmm, forget it!

Ok. Let's check on the request.

Request: Mount /var/opt/edis/idocs/directory1 to server2 from box1.

Simple? Maybe.

Steps:
1. An entry was added to the exports file.
2. Ran `exportfs -a`, but an error was generated:
parent directory already exported

Several attempted solutions were tried, but none succeeded:
1. commenting out the parent directory
2. manually exporting the child directory

A colleague came up with a brilliant idea. It was a dull day, so I never bothered to explore such an option myself. Anyway, he suggested adding the client to the parent directory's access list, as well as making a root entry for it, then exporting the FS [see the sketch below]. Successful, at least for the initial part.
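
A hedged sketch of what the exports entry on box1 might have ended up as [I'm assuming the already-exported parent was /var/opt/edis/idocs, and existing_client is a made-up name standing in for whatever was already on the list]:

# vi /etc/exports
/var/opt/edis/idocs -access=existing_client:server2,root=server2

# exportfs -a
# exportfs | grep idocs    # confirm the export now includes server2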

Logged in to the client side. Made a copy of the fstab [a great practice, just in case a stupid me does a stupid thing]. Then added the entry as:

# cp -p /etc/fstab /etc/fstab.backup.`date +%Y%m%d`
# vi /etc/fstab

...
box1:/var/opt/edis/idocs/directory1 /var/opt/edis/idocs/directory1 nfs rw ...
...
~
~
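
And to actually mount it on the client [a rough sketch, assuming server2 is also HP-UX; the paths come straight from the request]:

# mkdir -p /var/opt/edis/idocs/directory1    # create the mount point if it isn't there yet
# mount /var/opt/edis/idocs/directory1       # picks up the new fstab entry
# bdf /var/opt/edis/idocs/directory1         # confirm the NFS mount is in place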


The entry added on the client was specific. It means that even though the "whole" parent directory was exported, I think it's possible to mount just the specific directory to be accessed. At least for this case. I think... no! I need to review more about NFS. Adios!
