Solaris
Installation
- Oracle CBE (Common Build Environment): not for production
- eg: 11.4-11.4.42.0.0.111.0
- SRU (Support Repository Update) for production
- eg: 11.4-11.4.42.0.1.113.1
The installed version can be seen in /etc/os-release.
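For example, to view the release fields (field names such as VERSION follow the usual os-release convention):
cat /etc/os-release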
CBE does not install a desktop. To do this after a text install, change the repository location:
pkg set-publisher -G'*' -g http://pkg.oracle.com/solaris/release/ solaris
Check the online package, then install:
pkg info -r solaris-desktop
pkg install solaris-desktop
VirtualBox
pkg install runtime/python-39
pkgadd -d VirtualBox-7.0.8-SunOS-amd64-r156879.pkg
General
Booting: x86
Into single-user mode:
- In grub menu, edit entry
- On the $multiboot line, add "-s" to the end (see the sketch below)
- CTRL-X to boot
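The edited kernel line ends up looking something like this (an illustrative sketch only; the real paths and -B options come from your existing menu entry):
$multiboot /ROOT/solaris/@/$kern $kern -B $zfs_bootfs -s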
Show Grub boot options:
bootadm list-menu
Set default menu option to second one:
bootadm set-menu default=1
Set the timeout:
bootadm set-menu timeout=10
Booting: OpenBoot
- To reach the ok> prompt: STOP-A (or BREAK on a serial console)
banner
reset-all
probe-ide
probe-scsi
devaliases
printenv boot-device
setenv boot-device disk
reset
Package Management
Show package publisher:
pkg publisher
Show only the packages for which newer versions are available:
pkg info -u
Update:
pkg update
Show SRU installed (look at Branch and Packaging Date):
pkg info entire
Search for a package matching "ucb":
# pkg search ucb
INDEX      ACTION VALUE                                   PACKAGE
basename   file   usr/share/groff/1.22.3/font/devlj4/UCB  pkg:/text/[email protected]
basename   dir    usr/ucb                                 pkg:/legacy/compatibility/[email protected]
pkg.fmri   set    solaris/compatibility/ucb               pkg:/compatibility/[email protected]
pkg.fmri   set    solaris/legacy/compatibility/ucb        pkg:/legacy/compatibility/[email protected]
# pkg install pkg:/compatibility/[email protected]
Services
List all enabled services (-a also shows disabled):
svcs
Show long list about one service:
# svcs -l apache24
fmri         svc:/network/http:apache24
name         Apache 2.4 HTTP server
enabled      true
state        online
next_state   none
state_time   Mon Nov 12 16:22:58 2018
logfile      /var/svc/log/network-http:apache24.log
restarter    svc:/system/svc/restarter:default
contract_id  2017
manifest     /lib/svc/manifest/network/http-apache24.xml
dependency   optional_all/error svc:/system/filesystem/autofs:default (online)
dependency   require_all/none svc:/system/filesystem/local:default (online)
dependency   require_all/error svc:/milestone/network:default (online)
Enable a service:
svcadm enable apache24
User Management
To give a user the ability to su to root, add an entry (example below) in:
- /etc/user_attr.d/local-entries
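A minimal entry assigning the root role might look like this (a sketch; "someuser" is a placeholder, fields per the user_attr format):
someuser::::type=normal;roles=root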
To show status and unlock:
passwd -s someuser
passwd -u someuser
To stop account lockout:
usermod -K lock_after_retries=no someuser
iSCSI initiator (Static)
Check initiator service is up:
svcs network/iscsi/initiator
Add IP of storage system (use default port 3260):
iscsiadm add static-config iqn.2000-01.com.example:initiator01,192.0.2.2:3260
Check targets:
iscsiadm list static-config
Enable CHAP authentication:
iscsiadm modify initiator-node --authentication CHAP
Set the user and secret (password):
iscsiadm modify initiator-node --CHAP-name someuser
iscsiadm modify initiator-node --CHAP-secret
Enter CHAP secret: ************
Re-enter secret: ************
Enable:
iscsiadm modify discovery --static enable
Show initiator status:
iscsiadm list initiator-node
iscsiadm list target
iscsiadm list target-param -v
Show iSCSI disks:
iscsiadm list target -S | grep "OS Device Name"
See also: Oracle Docs
Kerberos
Client: kclient
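Run without arguments, kclient prompts for the realm and KDC details (it also supports a non-interactive mode driven by a profile; see its man page):
kclient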
Networking
Check status:
dladm show-link
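To also check IP addresses (ipadm is the companion tool for IP configuration):
ipadm show-addr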
Show hostname:
svccfg -s system/identity:node listprop config
Set hostname:
svccfg -s system/identity:node setprop config/nodename="my-sol-host"
svccfg -s system/identity:node setprop config/loopback="localhost"
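Refresh and restart the service for the change to take effect (standard SMF practice):
svcadm refresh system/identity:node
svcadm restart system/identity:node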
NTP
Client:
cd /etc/inet; cp ntp.client ntp.conf
(edit file)
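A minimal client ntp.conf might contain just a server line (a sketch; the pool hostname is a placeholder):
server 0.pool.ntp.org iburst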
svcadm enable ntp
Reset root password
- Boot from CD
- Select option 3: Shell
Check availability of rpool (none expected):
zpool list
Import rpool:
zpool import -f -R /a rpool
df -h should show some filesystems under /a
Show ZFS filesystems, check for rpool/ROOT/...
zfs list
Set mount point for root filesystem:
zfs set mountpoint=/mnt_tmp rpool/ROOT/11.4-11.4.31.0.1.88.5
Check that a new entry with mountpoint /mnt_tmp has been added:
zfs list
Mount filesystem:
zfs mount rpool/ROOT/11.4-11.4.31.0.1.88.5
Remove password hash from /a/mnt_tmp/etc/shadow
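The root entry in shadow changes from something like the first line below to the second (hash and date fields are illustrative):
root:$5$abcdefgh$XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:18000::::::
root::18000::::::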
Reset mount point:
zfs umount rpool/ROOT/11.4-11.4.31.0.1.88.5
zfs set mountpoint=/ rpool/ROOT/11.4-11.4.31.0.1.88.5
zpool export rpool
- Reboot server
- edit grub menu ("e")
- on line starting $multiboot, append "-s" option for single-user mode
- enter "root" and once in shell, change root password
- reboot
ZFS Storage Pools
Disks can be listed and formatted with:
format
Will show at least the root pool (rpool):
zpool list
zpool status
Show zfs file systems:
zfs list
Create a new pool from one device (file, or disk device):
zpool create pool1 /root/disk1
zpool list pool1
zfs list pool1
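For testing, plain files can be created as backing store with mkfile (a sketch; the 1g size is arbitrary):
mkfile 1g /root/disk1 /root/disk2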
Add a second disk, and zfs capacity expands automatically:
zpool add pool1 /root/disk2
Remove a pool:
zpool destroy pool1
Create a mirror:
zpool create pool1 mirror /root/disk1 /root/disk2
Check for errors:
zpool scrub pool1
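Scrub progress and any errors found are shown by:
zpool status pool1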
Remove a disk:
zpool detach pool1 /root/disk1
Add a new disk (resilvers the mirror from disk2 to disk1):
zpool attach pool1 /root/disk2 /root/disk1
Make a bigger RAID:
zpool create pool1 raidz /root/disk1 /root/disk2 /root/disk3 /root/disk4
Role Based Access Control
List profiles for a user:
profiles -l user1
Create a new profile (local files, not LDAP):
# profiles -p ChangePasswords -S files
> set desc="Allow changing of passwords"
> set auths=solaris.passwd.assign,solaris.account.activate
> info
> verify
> exit
Update a user to be assigned the new profile:
usermod -P +ChangePasswords user1
Profiles are stored locally in:
- /etc/security/prof_attr
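The stored entry looks something like this (a sketch following the prof_attr format):
ChangePasswords:::Allow changing of passwords:auths=solaris.passwd.assign,solaris.account.activate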
Zones
See also: Oracle Docs
Check zfs:
zfs list | grep zones
Configuring a zone:
root@npgs-solaris:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set bootargs="-m verbose"
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root@npgs-solaris:~#
List config:
root@npgs-solaris:~# zoneadm list -cv
  ID NAME     STATUS      PATH                  BRAND    IP
   0 global   running     /                     solaris  shared
   - zone1    configured  /system/zones/zone1   solaris  excl
Install zone:
root@npgs-solaris:~# zoneadm -z zone1 install
The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/zone1
Progress being logged to /var/log/zones/zoneadm.20181109T163221Z.zone1.install
       Image: Preparing at /system/zones/zone1/root.
 Install Log: /system/volatile/install.25403/install_log
 AI Manifest: /tmp/manifest.xml.5c4vcb
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: zone1
Installation: Starting ...
        Creating IPS image
        Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin: http://pkg.oracle.com/solaris/release/
DOWNLOAD                    PKGS         FILES    XFER (MB)   SPEED
Completed                415/415   65388/65388  428.2/428.2  507k/s
PHASE                                          ITEMS
Installing new actions                   89400/89400
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           1/1
Installation: Succeeded
        done.
        Done: Installation completed in 1328.592 seconds.
  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.
Log saved in non-global zone as /system/zones/zone1/root/var/log/zones/zoneadm.20181109T163221Z.zone1.install
Start the zone:
zoneadm -z zone1 boot
Log in to the zone console (disconnect with ~.) and finish setup with the text UI:
zlogin -C zone1
Check status:
zoneadm list -v
Show config:
zonecfg -z zone1 info -a
Dedicated CPUs (set min 1, max 3) to a zone:
# zonecfg -z zone1
zonecfg:zone1> add dedicated-cpu
zonecfg:zone1:dedicated-cpu> set ncpus=1-3
zonecfg:zone1:dedicated-cpu> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
("select" to enter a resource once it exists. "remove" to delete)
Set Memory cap:
zonecfg:zone1> add capped-memory
zonecfg:zone1:capped-memory> set physical=512m
zonecfg:zone1:capped-memory> set swap=1024m
zonecfg:zone1:capped-memory> set locked=128m
zonecfg:zone1:capped-memory> end
Set a CPU cap (a hard upper limit on CPU consumption, unlike shares, which only apply under contention), eg half a CPU:
# zonecfg -z zone1
zonecfg:zone1> add capped-cpu
zonecfg:zone1:capped-cpu> set ncpus=0.5
zonecfg:zone1:capped-cpu> end