Tuesday, September 18, 2012

Netinstall CentOS 6.3 from USB disk

From Windows:
Use UNetbootin to write the netinstall ISO to a USB stick.

From Linux:
If you want to install CentOS from a USB disk, you simply need to download netinstall.iso (the analogue of the boot.iso that Red Hat provides).

It seems they made this netinstall.iso a hybrid image, so it can be booted both from CD and from (USB) disk.

So you can just dd it to your USB disk.

To check which device node udev created for your inserted USB stick, look at the end of the dmesg output:

[ 5620.253160] scsi 6:0:0:0: Direct-Access     SanDisk  Cruzer           8.02 PQ: 0 ANSI: 0 CCS
[ 5620.256030] sd 6:0:0:0: Attached scsi generic sg5 type 0
[ 5620.256236] sd 6:0:0:0: [sdd] 7856127 512-byte logical blocks: (4.02 GB/3.74 GiB)
[ 5620.257637] sd 6:0:0:0: [sdd] Write Protect is off
[ 5620.257643] sd 6:0:0:0: [sdd] Mode Sense: 45 00 00 08
[ 5620.258645] sd 6:0:0:0: [sdd] No Caching mode page present
[ 5620.258666] sd 6:0:0:0: [sdd] Assuming drive cache: write through
[ 5620.262683] sd 6:0:0:0: [sdd] No Caching mode page present
[ 5620.262718] sd 6:0:0:0: [sdd] Assuming drive cache: write through
[ 5620.264633]  sdd: sdd1
[ 5620.268622] sd 6:0:0:0: [sdd] No Caching mode page present
[ 5620.268643] sd 6:0:0:0: [sdd] Assuming drive cache: write through
[ 5620.268652] sd 6:0:0:0: [sdd] Attached SCSI removable disk


Then do the dd (double-check the target device; this overwrites it):

dd if=CentOS-6.3-x86_64-netinstall.iso of=/dev/sdd bs=4M
What the netinstall ISO does is boot the anaconda installer. It does not contain a package repository: in the installer you have to select which repository you want to use, for example http://linux.mirrors.es.net/centos/6.3/os/x86_64/
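Optionally, you can sanity-check the write before booting from the stick. A sketch, assuming GNU cmp and stat: the device is larger than the ISO, so only the ISO's length is compared.

```shell
# Compare the first N bytes of the target against the image, where N is the
# image size. Exits 0 when they match.
verify_image() {   # $1 = iso file, $2 = device (e.g. /dev/sdd)
    cmp -n "$(stat -c%s "$1")" "$1" "$2"
}
# verify_image CentOS-6.3-x86_64-netinstall.iso /dev/sdd && echo "write OK"
```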

Saturday, September 15, 2012

How to rescue missing /boot partition on fedora

There are many ways the /boot partition can go missing: it can be accidentally erased or corrupted by running dd against the wrong device, by Windows when dual booting, and so on.

But there is an easy fix to get your /boot partition back without resorting to any backup/restore method!

I tested this on Fedora 16, but it should be equally applicable to other Fedora, RHEL, CentOS and Scientific Linux releases.

What you have to do is boot from your Fedora 16 install media (DVD).

When booted, choose the "Rescue installed system" menu option.

Make sure you enable the network and let the rescue wizard discover all filesystems.
Once the wizards are done, go into the shell and run chroot /mnt/sysimage.

The first thing to try is mounting /boot from the device where it was installed before. If this fails, you have to recreate the /boot filesystem.

My /boot was a filesystem on top of a dedicated RAID mirror, so I did:

mdadm -A --scan
mkfs.ext4 /dev/md0
mount /dev/md0 /boot

(Replace /dev/md0 with whatever device your /boot filesystem resides on.)

Your fstab probably mounts /boot by its UUID, which has now changed. Comment out the /boot line in fstab for now (with vi); we will enable it again once the system is booted.
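vi works fine for this; as a non-interactive sketch, a sed one-liner could do it too (the pattern assumes /boot appears as a mount point surrounded by whitespace):

```shell
# Prepend '#' to any fstab line whose mount point is /boot; keeps a .bak copy.
comment_boot_line() {   # $1 = path to the fstab file
    sed -i.bak '\%[[:space:]]/boot[[:space:]]%s/^/#/' "$1"
}
# comment_boot_line /etc/fstab
```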

Next, you need to have the kernel and the initramfs on the /boot filesystem along with grub2 files.
To achieve this run:

yum install kernel (to get kernel and initramfs back)

(You can also use yum reinstall kernel if you do not want the latest kernel available in the repository.)

If for any reason your bootloader is broken too, you have to reinstall/reconfigure it.

In my case the bootloader was grub2 (if you use legacy grub, consult the documentation on how to reinstall it in rescue mode).

Since my /boot sits on a RAID mirror, I reinstalled grub2 on both devices (so the machine can still boot from the second disk if the first one fails):

grub2-install /dev/sda
grub2-install /dev/sdb


Regenerate the grub2 config file
grub2-mkconfig -o /boot/grub2/grub.cfg

Eject your Fedora 16 DVD and boot normally. Voila, your system is up and running again!

Back in the booted system, you still need to add /boot to fstab again (otherwise the grub2 tools will break):

blkid /dev/md0

Replace the returned UUID in fstab
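As a sketch (the UUID below is made up), the fstab line can be built from the blkid output:

```shell
# On the real system: UUID=$(blkid -s UUID -o value /dev/md0)
UUID="0f3e4a12-8c2d-4a6b-9e01-5b7c8d9e0f1a"   # hypothetical value
LINE="UUID=$UUID /boot ext4 defaults 1 2"
echo "$LINE"                                  # this is what goes into /etc/fstab
```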

If you have SElinux enabled:

restorecon -vR /boot

A high-level summary of the steps:
  • Boot into rescue mode
  • Recreate /boot filesystem
  • Reinstall kernel
  • Reinstall grub2 (if applicable)
  • Reconfigure grub2 (if applicable)
  • Reboot

Thursday, August 30, 2012

GPS tracking

I recently discovered the joy of gps tracking.

I use the My Tracks app available on the Google Play market for Android. With this application I can log my journeys and create gpx files.

But what I really wanted to share is a great automation script for converting gpx files to images (png).

The script and some documentation can be downloaded here: gpx2png. I use it in my own little batch script that scans folders recursively and automatically creates an image from every gpx file it encounters.
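The batch idea can be sketched like this (the actual gpx2png invocation and its flags are an assumption; check the script's documentation):

```shell
# Walk a directory tree and derive a .png target for every .gpx file found.
convert_all() {   # $1 = root folder to scan
    find "$1" -type f -name '*.gpx' | while read -r gpx; do
        png="${gpx%.gpx}.png"
        echo "converting: $gpx -> $png"
        # gpx2png "$gpx" > "$png"   # real call; flags per gpx2png docs
    done
}
```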

Monday, May 7, 2012

Unpack boot.superboot.img Android boot image

Today I've rooted my HTC One V, guided by this excellent guide.

But I wanted to know what the contents of boot.superboot.img were, so I could understand the process better. This post is by no means an explanation of the rooting process; for that I recommend the following reading material: general explanation android rooting.

To view/unpack an android boot.img you first need to download the tools. I tested this on a Scientific Linux release 6.1 (Carbon).

wget https://android-serialport-api.googlecode.com/files/android_bootimg_tools.tar.gz
Extracting this tarball with tar xvzf android_bootimg_tools.tar.gz gives you two binaries: unpackbootimg and mkbootimg.

(Update: instead of unpackbootimg, you could also use the perl script split_bootimg.pl.)

Use ./unpackbootimg -i <img> -o <outputpath> to unpack into a folder that you created upfront.
results:
boot.superboot.img-pagesize
boot.superboot.img-cmdline
boot.superboot.img-base
boot.superboot.img-zImage ---> kernel
boot.superboot.img-ramdisk.gz ---> ramdisk

The interesting part will be in the ramdisk.

To extract the ramdisk into your current directory, execute the following command:

gunzip -c  boot.superboot.img-ramdisk.gz | cpio -i
results:

cwkeys
data
default.prop
dev
init
init.bliss.rc
init.debug_mfgkernel.rc
init.debug_normal.rc
init.goldfish.rc
init.primou.rc
init.rc
init.usb.rc
proc
sbin
superboot --> The contents of this directory will root our phone.
sys
system
ueventd.goldfish.rc
ueventd.primou.rc
ueventd.rc

----
ls superboot/
su superboot.sh Superuser.apk


The superboot.sh script basically just copies su and Superuser.apk to the filesystem, where they can be used by applications that require root access.

Note that the su binary has the setuid bit set:
-rwsr-sr-x 1 root root 91980 May 6 23:03 /system/xbin/su

Wednesday, April 4, 2012

Online - hot extend of a physical volume on an active volumegroup

Two scenarios will be tested. The goal is to see what options we have to online-extend a physical volume in an active volume group on a VMware Linux guest, and to summarize the different methods. In all scenarios the first step is to online-extend a vmdk; the disk to be extended is /dev/sdb. We did not use multipathing software in our tests (relevant if you have a physical machine), but that is a topic for another post; feel free to comment on it.

Both scenarios are tested on 32 bit machines:


Host: ESX 4.1.0 virtual machine version 7
Guest: CentOS release 5.7 - kernel 2.6.18-274.el5

Host: ESX 4.1.0 virtual machine version 4
Guest: SLES10 2.6.16.60-0.85.1-smp

Host: ESX 4.1.0 virtual machine version 4
Guest: SLES11 3.0.13-0.27-pae

Host: ESX 5 virtual machine version 8
 Guest: RHEL6.2 2.6.32-220.el6.i686



Scenario 1: Online extend pv created in first partition of a device (e.g. /dev/sdb1) of an active vg


Steps taken:
  1. Extend disk in vmware
  2. result: fdisk -l /dev/sdb does not show extended size in guest   
  3. echo 1 > /sys/block/sdb/device/rescan or rescan-scsi-bus.sh --forcerescan (only possible on SUSE)
  4. result: fdisk -l /dev/sdb shows extended size of /dev/sdb , now we can extend /dev/sdb1    
  5. do fdisk /dev/sdb, in fdisk execute following steps:
  • delete partition /dev/sdb1
  • create a new partition /dev/sdb1 that uses the extended space.
  • put LVM id (8e)
  • write partition table changes to disk
Writing the changes gives the error:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy. The kernel still uses the old table. The new table will be used at the next reboot.

At this point the new partition table is written to disk, but the kernel still uses the old in-memory partition table (see cat /proc/partitions). If we try to inform the kernel of the new size of /dev/sdb1 with partprobe, we get no result.
Following errors were observed:
SLES11:Error: Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.
RHEL6:Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.  

Note: if we do not extend the first partition but instead create a new partition (/dev/sdb2) to use as an additional physical volume, this second partition can also not be made visible to the kernel with partprobe while the disk is in use. So in effect this is the same problem as extending the first partition.

The only thing holding us back from a successful pvresize of /dev/sdb1 is that the kernel cannot be made to re-read the partition table of a disk that is in use.

So what do you do if you have a physical volume created on a partition (e.g. /dev/sdb1) and you do not want to reboot your server to add free space to your volume group?
Just do not extend the existing vmdk; add another vmdk to the guest instead (on the guest, rescan the SCSI bus to see the device: echo "- - -" > /sys/class/scsi_host/hostX/scan), and use this new vmdk as another PV in your VG (e.g. pvcreate /dev/sdc). This workaround is simple but can become cluttered if you add too many devices in follow-up extensions of the volume group.
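The workaround boils down to three commands (host number, device name and VG name are placeholders for your own values):

```shell
echo "- - -" > /sys/class/scsi_host/host0/scan   # rescan so the new vmdk shows up
pvcreate /dev/sdc                                # initialize the new disk as a PV
vgextend myvg /dev/sdc                           # add it to the volume group
```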

Of course, if your business does not object to making the server/filesystem unavailable for some time, you can reboot the server, or you can umount the LVs and run vgchange -a n VG so the kernel no longer uses the partitions. After that, a partprobe will succeed (/proc/partitions also gets updated). Do not forget to make your volume group active again (vgchange -a y VG) before remounting your filesystems.


Scenario 2: online extend pv, created directly on the device (/dev/sdb), of an active vg
Steps taken:
  1. Extend disk in vmware
  2. result: fdisk -l /dev/sdb does not show extended size in guest 
  3. blockdev --rereadpt /dev/sdb
  4. result: fdisk -l /dev/sdb now shows the extended size, as does cat /proc/partitions. Note: partprobe was not necessary because there are no partitions here. I do not know why rescanning the scsi bus was not needed, though.
  5. pvresize /dev/sdb (= success)

Conclusion

For a virtual Linux VMware guest, online extension of a physical volume in an active VG is only possible if the PV was created directly on a disk (e.g. pvcreate /dev/sdb).
If the PV was created on a partition, that partition needs to be extended first, and the kernel will only re-read the updated partition table (partprobe) when the partition/disk is not in use.

Wednesday, December 7, 2011

Mysql dump script

For those who are interested I made a quick and dirty mysql dump script.

#!/bin/bash
# Must be run via cron

PASS=<fill in pass>
PATHDUMP=<path to store dumps>
DATE=$(date '+%d-%m-%Y--%s')
EMAIL="<your email address>"

mysqldump -u root -p$PASS --all-databases --single-transaction > $PATHDUMP/database_$DATE.sql 2>/tmp/.$$errorscript

if [ $? -eq 0 ]; then
    echo "MySQL dump database SUCCESSFUL" | mail -s "MySQL dump database SUCCESSFUL" "$EMAIL"
    gzip $PATHDUMP/database_$DATE.sql
else
    echo "Message is $(cat /tmp/.$$errorscript)" | mail -s "MySQL dump database FAILED" "$EMAIL"
    rm -f $PATHDUMP/database_$DATE.sql
fi

#restore with mysql -u root -p$PASS < <unzipped_dump_file>


On Scientific Linux 6.1, symlink the script into /etc/cron.daily (it will then be triggered by anacron):

ln -s <script> /etc/cron.daily

Wednesday, November 2, 2011

Vagrant/puppet based graphite installation on fedora 15

I recently created a GitHub account: https://github.com/svenvd

My first "project" there is a Vagrant/Puppet based graphite auto-installation on a Fedora 15 VM.