On x86, the boot model relies on the BIOS. Instead of a VTOC, x86 systems use an MBR partition table.
The x86 counterpart to the OBP utility is GRUB.
Solaris 11 uses a modified version of GNU GRUB 0.97.
In a GRUB menu entry:
- bootfs selects the ZFS file system to boot from
- kernel$ locates the kernel and passes it additional parameters
- module$ locates the boot archive
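A typical menu.lst entry ties these directives together. This is an illustrative sketch: the pool name rpool and the boot environment name solaris are assumptions, not taken from this post.

```
title Oracle Solaris 11
bootfs rpool/ROOT/solaris
kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/amd64/boot_archive
```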
Once you start the boot process, whether from the OBP or from GRUB, the remaining phases are the same:
- Booter phase: the boot archive, identified by the bootfs variable, is read
- Ramdisk phase: the boot archive image is mounted as a stand-alone, read-only file system containing configuration files and drivers
- Kernel phase: the kernel mounts the ramdisk and reads the driver modules. One of those drivers supports the root file system and can attach it to the specified root device. The kernel then unmounts the ramdisk and continues working from the root file system.
The boot archive is a collection of files derived from the full root ( / ) file system of a Solaris 11 instance.
When you install Solaris 11, the system creates the archive by copying key files from /.
The archive must be kept in sync with the root file system; this is done automatically during a graceful shutdown.
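The sync is handled by bootadm(1M); after an unclean shutdown, or after changing boot-critical files, the archive can also be rebuilt by hand:

```
# rebuild the boot archive against the current root file system
bootadm update-archive

# list the files included in the archive
bootadm list-archive
```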
SPARC systems rely on OpenBoot (IEEE-1275).
Behind it sits a configurable, programmable environment run by the OpenBoot PROM (OBP). The OBP utility sets up a SPARC system to accept and execute the kernel.
The boot process reduces to four phases; the last three (Booter, Ramdisk, Kernel) work as described above, while the first is SPARC-specific:
- OpenBoot PROM
The OpenBoot PROM starts by looking for a file system reader.
If you’re booting from a disk, you first need its layout. The boot disk stores a partition map (VTOC) at sector 0.
OBP finds and loads a file system reader from sectors 1-15 (the boot block); with it, it can then find and read the boot archive, a collection of configuration files and drivers.
The boot utility passes the OBP environment variables to the kernel.
With boot -a, you can invoke an interactive session to override the values passed as defaults.
boot -m will override the default running state or logging level:
boot -m milestone=<milestone>
Valid milestone values are:
- none (the point at which you can repair the SMF services facility)
- single-user
- multi-user
- multi-user-server
- all
The second option for boot -m is:
boot -m [quiet | verbose | debug]
reboot -f will apply the Fast Reboot feature (default on x86 but not on SPARC).
While trying to update a CentOS server I got this error:
[root@washingmashine ~]# yum update
error: rpmdb: BDB0113 Thread/process 47226/140411903506496 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
Error: rpmdb open failed
The RPM database seemed corrupted.
First, I removed the stale Berkeley DB environment files:
[root@washingmashine ~]# rm -rf /var/lib/rpm/__db*
Then I used the rpm --rebuilddb command to rebuild it.
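Putting it together, the whole recovery (with a package query at the end just to confirm the database opens again) looked like:

```
[root@washingmashine ~]# rm -rf /var/lib/rpm/__db*
[root@washingmashine ~]# rpm --rebuilddb
[root@washingmashine ~]# rpm -qa | wc -l
```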
After extending a virtual drive in VMware, I had to add the additional disk space to the logical volume.
After adding the space, you need to resize the file system. On the infrastructure I was working on I usually use the resize2fs command, but this time it didn't work:
[root@washingmashine ~]# resize2fs /dev/mapper/data-archive
resize2fs 1.44.6 (5-Mar-2019)
resize2fs: Bad magic number in super-block while trying to open
Couldn't find valid filesystem superblock.
The resize2fs program resizes ext2, ext3, or ext4 file systems; I took for granted that the logical volume was using an ext4 file system, but I was wrong:
[root@washingmashine ~]# mount | grep data-archive
/dev/mapper/data-archive on / type xfs (rw,relatime,...
XFS has its own set of tools; in this case I had to use the xfs_growfs command:
[root@washingmashine ~]# xfs_growfs /dev/mapper/data-archive
This time, the file system was resized successfully.
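To avoid guessing next time, the filesystem type can be checked first and the matching tool picked. A minimal sketch, assuming a findmnt lookup on a live system (shown commented out, with the device path from this post as the example):

```shell
#!/bin/sh
# Map a filesystem type to the tool that grows it online.
fs_resize_tool() {
  case "$1" in
    ext2|ext3|ext4) echo resize2fs ;;
    xfs)            echo xfs_growfs ;;
    *)              echo "unsupported: $1" >&2; return 1 ;;
  esac
}

# Example lookup on a live system:
# FSTYPE=$(findmnt -n -o FSTYPE /dev/mapper/data-archive)
fs_resize_tool xfs
```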
After upgrading to kernel-3.10.0-957.21.3.el7 on a CentOS server, I experienced connection timeout issues on Windows servers trying to access SMB shares. In contrast, a Linux client could access the share without any problem.
The bug was reported in the CentOS Bug Tracker and is caused by one of the patches applied to address CVE-2019-11478.
Some applications set tiny SO_SNDBUF values and expect TCP to just work.
Recent patches to address CVE-2019-11478 broke them in case of losses, since re-transmits might be prevented.
To (temporarily) fix this issue, I increased the SO_SNDBUF value in smb.conf's socket options:
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=65536 SO_SNDBUF=65536
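For reference, Samba's SO_SNDBUF option maps to the standard socket-level setting of the same name; a quick sketch of what it does at the syscall level (plain Python, nothing Samba-specific):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a 64 KiB kernel send buffer, as in the smb.conf line above.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 65536)
# The kernel may round the value up (Linux typically doubles it).
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```

Note that the kernel treats the requested size as a hint, so the value read back is not necessarily the value you set.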