Solaris
Installation
- Oracle CBE (Common Build Environment) : Not for production
- eg: 11.4-11.4.42.0.0.111.0
- SRU (Support Repository Update) for production
- eg: 11.4-11.4.42.0.1.113.1
The installed version is shown in /etc/os-release.
CBE does not install a desktop. To add one after a text install, change the repository location:
pkg set-publisher -G '*' -g http://pkg.oracle.com/solaris/release/ solaris
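Confirm the new origin is active:
pkg publisher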
Check the online package, then install:
pkg info -r solaris-desktop
pkg install solaris-desktop
Packages
Search for the available gcc package, then install; other packages install/uninstall the same way:
pkg search gcc | grep "C++ Compiler"
pkg install gcc-c++
pkg install python-39
pkg uninstall something
MySQL
Install and start:
pkg install mysql
svcadm enable mysql
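Confirm it is online (the exact FMRI varies with the MySQL version shipped, so a glob search is safest):
svcs -a | grep -i mysql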
VirtualBox
Install the python runtime (needed by VirtualBox), then add the downloaded SVR4 package:
pkg install runtime/python-39
pkgadd -d VirtualBox-7.0.8-SunOS-amd64-r156879.pkg
General
Booting: x86
Into single-user mode:
- In grub menu, edit entry
- On $multiboot line, add "-s" to end
- CTRL-X to boot
Show Grub boot options:
bootadm list-menu
Set default menu option to second one:
bootadm set-menu default=1
Set the timeout:
bootadm set-menu timeout=10
Booting: OpenBoot
- To reach the ok> prompt: STOP-A or BRK
banner
reset-all
probe-ide
probe-scsi
devaliases
printenv boot-device
setenv boot-device disk
reset
Package Management
Show package publisher:
pkg publisher
Show only the packages for which newer versions are available:
pkg info -u
Update:
pkg update
Show SRU installed (look at Branch and Packaging Date):
pkg info entire
Search for a package matching "ucb":
# pkg search ucb
INDEX      ACTION VALUE                                    PACKAGE
basename   file   usr/share/groff/1.22.3/font/devlj4/UCB   pkg:/text/[email protected]
basename   dir    usr/ucb                                  pkg:/legacy/compatibility/[email protected]
pkg.fmri   set    solaris/compatibility/ucb                pkg:/compatibility/[email protected]
pkg.fmri   set    solaris/legacy/compatibility/ucb         pkg:/legacy/compatibility/[email protected]
# pkg install pkg:/compatibility/[email protected]
Services
List all enabled services (-a also shows disabled):
svcs
Show long list about one service:
# svcs -l apache24
fmri         svc:/network/http:apache24
name         Apache 2.4 HTTP server
enabled      true
state        online
next_state   none
state_time   Mon Nov 12 16:22:58 2018
logfile      /var/svc/log/network-http:apache24.log
restarter    svc:/system/svc/restarter:default
contract_id  2017
manifest     /lib/svc/manifest/network/http-apache24.xml
dependency   optional_all/error svc:/system/filesystem/autofs:default (online)
dependency   require_all/none svc:/system/filesystem/local:default (online)
dependency   require_all/error svc:/milestone/network:default (online)
Enable a service:
svcadm enable apache24
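If a service fails to come online, svcs -x explains why, and the service log (path shown by svcs -l above) has the detail:
svcs -x apache24
tail /var/svc/log/network-http:apache24.log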
User Management
To give a user the ability to su to root, add an entry in:
- /etc/user_attr.d/local-entries
To show status and unlock:
passwd -s someuser
passwd -u someuser
To stop account lockout:
usermod -K lock_after_retries=no someuser
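To verify the new setting, userattr should print the value for that user:
userattr lock_after_retries someuser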
iSCSI initiator (Static)
Check initiator service is up:
svcs network/iscsi/initiator
Add IP of storage system (use default port 3260):
iscsiadm add static-config iqn.2000-01.com.example:initiator01,192.0.2.2:3260
Check targets:
iscsiadm list static-config
Enable CHAP:
iscsiadm modify initiator-node --authentication CHAP
Set user, and secret (password):
iscsiadm modify initiator-node --CHAP-name someuser
iscsiadm modify initiator-node --CHAP-secret
Enter CHAP secret: ************
Re-enter secret: ************
Enable:
iscsiadm modify discovery --static enable
Show initiator status:
iscsiadm list initiator-node
iscsiadm list target
iscsiadm list target-param -v
Show iSCSI disks:
iscsiadm list target -S | grep "OS Device Name"
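The LUN can then be used like any local disk, eg as a new pool (the device name below is illustrative; take the real one from the "OS Device Name" output above):
zpool create iscsipool c0t600144F0ABCD2B00d0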
See also: Oracle Docs
Kerberos
Client: kclient
Networking
Check status:
dladm show-link
dladm show-ether
Show hostname:
svccfg -s system/identity:node listprop config
Set hostname:
svccfg -s system/identity:node setprop config/nodename="my-sol-host"
svccfg -s system/identity:node setprop config/loopback="localhost"
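Refresh and restart the service so the new properties are applied:
svcadm refresh svc:/system/identity:node
svcadm restart svc:/system/identity:node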
NTP
Client:
cd /etc/inet; cp ntp.client ntp.conf
(edit file)
svcadm enable ntp
svcadm restart ntp
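Check peers are being reached (sync can take a minute or two):
ntpq -p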
Reset root password
- Boot from CD
- Select option 3: Shell
Check availability of rpool (none expected):
zpool list
Import rpool:
zpool import -f -R /a rpool
df -h should show some filesystems under /a
Show zfs filesystems; check for rpool/ROOT/...
zfs list
Set mount point for root filesystem:
zfs set mountpoint=/mnt_tmp rpool/ROOT/11.4-11.4.31.0.1.88.5
Check a new entry under /a/mnt_tmp has been added:
zfs list
Mount filesystem:
zfs mount rpool/ROOT/11.4-11.4.31.0.1.88.5
Remove password hash from /a/mnt_tmp/etc/shadow
Reset mount point:
zfs umount rpool/ROOT/11.4-11.4.31.0.1.88.5
zfs set mountpoint=/ rpool/ROOT/11.4-11.4.31.0.1.88.5
zpool export rpool
- Reboot server
- Edit the grub menu entry ("e")
- On the line starting $multiboot, append "-s" for single-user mode
- Enter "root" and, once in the shell, change the root password
- Reboot
Resource Pools
Disks can be listed and formatted with:
format
Will show at least the root pool (rpool):
zpool list
zpool status
Show zfs file systems:
zfs list
Create a new pool from one device (file, or disk device):
zpool create pool1 /root/disk1
zpool list pool1
zfs list pool1
Add a second disk, and zfs capacity expands automatically:
zpool add pool1 /root/disk2
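Confirm the expanded capacity:
zpool list pool1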
Remove a pool:
zpool destroy pool1
Create a mirror:
zpool create pool1 mirror /root/disk1 /root/disk2
Check for errors:
zpool scrub pool1
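Scrub runs in the background; progress and any errors show up in:
zpool status pool1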
Remove a disk:
zpool detach pool1 /root/disk1
Add a new disk back (resilvers the mirror from disk2 to disk1):
zpool attach pool1 /root/disk2 /root/disk1
Make a bigger RAID:
zpool create pool1 raidz /root/disk1 /root/disk2 /root/disk3 /root/disk4
Role Based Access Control (RBAC)
List profiles for a user:
profiles -l user1
Create a new profile (local files, not LDAP):
# profiles -p ChangePasswords -S files
> set desc="Allow changing of passwords"
> set auths=solaris.passwd.assign,solaris.account.activate
> info
> verify
> exit
Update a user to be assigned the new profile:
usermod -P +ChangePasswords user1
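To check, the user should see the profile listed and, because authorizations are consulted by passwd itself, be able to change another user's password (user2 here is a hypothetical account):
su - user1
profiles
passwd user2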
Profiles are stored locally in:
- /etc/security/prof_attr
Zones
See also: Oracle Docs
Check zfs:
zfs list | grep zones
Configuring a zone:
root@npgs-solaris:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set bootargs="-m verbose"
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root@npgs-solaris:~#
List config:
zonecfg -z zone1 info
List zones and their state (-c includes configured zones, -v verbose):
root@npgs-solaris:~# zoneadm list -cv
  ID NAME     STATUS      PATH                 BRAND    IP
   0 global   running     /                    solaris  shared
   - zone1    configured  /system/zones/zone1  solaris  excl
Install zone:
root@npgs-solaris:~# zoneadm -z zone1 install
The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/zone1
Progress being logged to /var/log/zones/zoneadm.20181109T163221Z.zone1.install
       Image: Preparing at /system/zones/zone1/root.
 Install Log: /system/volatile/install.25403/install_log
 AI Manifest: /tmp/manifest.xml.5c4vcb
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: zone1
Installation: Starting ...
        Creating IPS image
Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            415/415   65388/65388  428.2/428.2  507k/s

PHASE                                          ITEMS
Installing new actions                   89400/89400
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
Installation: Succeeded
        done.
        Done: Installation completed in 1328.592 seconds.
  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.
Log saved in non-global zone as /system/zones/zone1/root/var/log/zones/zoneadm.20181109T163221Z.zone1.install
Start the zone:
zoneadm -z zone1 boot
Log in to the zone console (disconnect with ~.) and finish setup with the text UI:
zlogin -C zone1
Check status:
zoneadm list -v
Show config:
zonecfg -z zone1 info -a
zoneadm list -ip
Shutdown a zone:
zoneadm -z zone1 shutdown
Networking
By default, new zones are created with an exclusive IP network resource: the zone has its own complete network stack, eg its own IP address and routing.
A network resource called anet with the following properties was automatically created:
- ip-type is exclusive
- linkname is net0
- lower-link is auto
- mac-address is random
- link-protection is mac-nospoof
Confirm with:
zonecfg -z z1 info -a
The anet link exists only while the zone is running; check with:
ipadm
dladm show-link
Setting resource limits
Assign dedicated CPUs to a zone (set min 1, max 3; requires svc:/system/pools/dynamic to be enabled):
# zonecfg -z zone1
zonecfg:zone1> add dedicated-cpu
zonecfg:zone1:dedicated-cpu> set ncpus=1-3
zonecfg:zone1:dedicated-cpu> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
("select" to enter a resource once it exists. "remove" to delete)
CPU caps (an alternative to dedicated CPUs) offer finer-grained control. Set a CPU cap (a hard limit on CPU consumption), eg 150% of one CPU:
# zonecfg -z zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=1.5
zonecfg:zone1:capped-cpu> end
Set Memory cap:
zonecfg:zone1> add capped-memory
zonecfg:zone1:capped-memory> set physical=512m
zonecfg:zone1:capped-memory> set swap=1024m
zonecfg:zone1:capped-memory> set locked=128m
zonecfg:zone1:capped-memory> end
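Review what is set on the zone:
zonecfg -z zone1 info capped-memory
zonecfg -z zone1 info capped-cpu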
Zones can be made immutable with the file-mac-profile property (see the example after this list):
- none: normal read/write.
- strict: read-only file system, no exceptions.
- fixed-configuration: permits updates to /var/* directories, with the exception of directories that contain system configuration components. IPS packages, including new packages, cannot be installed. Persistently enabled SMF services are fixed. SMF manifests cannot be added from the default locations. Logging and auditing configuration files can be local. syslog and audit configuration are fixed.
- flexible-configuration: permits modification of files in /etc/* directories, changes to root's home directory, and updates to /var/* directories. IPS packages, including new packages, cannot be installed. Persistently enabled SMF services are fixed. SMF manifests cannot be added from the default locations. Logging and auditing configuration files can be local. syslog and audit configuration can be changed.
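For example, to apply the fixed-configuration profile (a reboot is needed for it to take effect):
zonecfg -z zone1 set file-mac-profile=fixed-configuration
zoneadm -z zone1 reboot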
The mutability setting can be observed:
# zoneadm list -p
0:global:running:/::solaris:shared:-:none::
1:z2:running:/system/zones/zone2:e4755797-169b-4f5b-b016-a28ccfbff24a:solaris:excl:-:none::
4:z1:running:/system/zones/zone1:f1784415-3fe3-4cca-8f23-ac7c9180664f:solaris:excl:R:fixed-configuration::
Here, z1 is immutable (the R flag and the fixed-configuration profile in the last two fields).
Creating a template
Create a template based on zone "z1":
zlogin z1 sysconfig create-profile -o /root/z1-template
A configuration file will be created at /root/z1-template/sc_profile.xml inside z1. From the global zone, stop z1, then export the zone config:
zonecfg -z z1 export -f /root/z2-profile
Copy the system configuration template:
cp /system/zones/z1/root/root/z1-template/sc_profile.xml /root/z2-template.xml
Create zone 2 based on the z1 template:
zonecfg -z z2 -f /root/z2-profile
zoneadm -z z2 clone -c /root/z2-template.xml z1
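Boot the clone; with the SC profile supplied it should come up already configured:
zoneadm -z z2 boot
zlogin -C z2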