Starting a niche thread: a ppc64le workstation, Talos II / Debian / Sway
Last edited by AraTurambar on 2025-9-19 21:58.
IBM pushed OpenPOWER, got stabbed in the back by Google, and the effort went nowhere; its byproduct is RaptorCS's POWER9 workstation, the Talos II. Everything from the ISA up is open; the only remaining blobs are the GPU, NIC, and SATA firmware.
In a world dominated by Intel ME / AMD PSP / ARM TrustZone, it is the only high-performance option with no ring -3 coprocessor, full stop. Probably not many people run this platform, but it is full of pitfalls, so I'm writing them down in case they're useful to someone.
fastfetch output to open the thread.
_,met$$$$$gg. ara@talos
,g$$$$$$$$$$$$$$$P. ---------
,g$$P"" """Y$$.". OS: Debian GNU/Linux 13 (trixie) ppc64le
,$$P' `$$$. Host: T2P9S01 REV 1.01
',$$P ,ggs. `$$b: Kernel: Linux 6.12.43+deb13-powerpc64le
`d$$' ,$P"' . $$$ Uptime: 1 hour, 4 mins
$$P d$' , $$P Packages: 1131 (dpkg)
$$: $$. - ,d$$' Shell: bash 5.2.37
$$; Y$b._ _,d$P' Display (DELL U2718Q): 3840x2160 @ 60 Hz (as 1920x1080) in 28"
Y$$. `.`"Y$$$$P"' Display (DELL U2720Q): 3840x2160 @ 60 Hz (as 1920x1080) in 27"
`$$b "-.__ Display (DELL U2720Q): 3840x2160 @ 60 Hz (as 1920x1080) in 27"
`Y$$b WM: Sway 1.10.1 (Wayland)
`Y$$. Cursor: Adwaita
`$$b. Terminal: foot 1.21.0
`Y$$b. Terminal Font: monospace (8pt)
`"Y$b._ CPU: POWER9 (raw), altivec supported (88) @ 3.80 GHz
`"""" GPU: AMD FirePro W5100
Memory: 3.56 GiB / 251.36 GiB (1%)
Swap: 0 B / 178.80 GiB (0%)
Disk (/): 5.56 GiB / 3.23 TiB (0%) - ext4
Locale: en_US.UTF-8
The usual choice is the WX7100, but I went with the FirePro W5100 instead: Cape Verde is the last AMD generation before SMU firmware and HDCP 2.2 were introduced, and the first generation that works with amdgpu.
Within that generation, the W5100 sits in the sweet spot: enough VRAM (4 GB), enough DisplayPort outputs (4x DP 1.2), and low enough power (50 W). With the right configuration it can drive multiple 4K@60Hz displays under Wayland.
The configuration that works, after much trial and error. In `/etc/default/grub`:
GRUB_CMDLINE_LINUX="radeon.si_support=0 radeon.cik_support=0 amdgpu.si_support=1 amdgpu.cik_support=1 amdgpu.dpm=1 video=offb:off console=tty1"
This blacklists radeon by hand and enables amdgpu in its place; `video=offb:off` keeps the Open Firmware framebuffer from holding onto the display. `console=tty1` matters because with LUKS the disk-unlock prompt otherwise goes to the serial console; pinning the console to tty1 keeps it visible.
`/etc/modprobe.d/amdgpu.conf`:
# Blacklist radeon driver
blacklist radeon
# Force amdgpu to load for Cape Verde (W5100)
options amdgpu si_support=1
options amdgpu cik_support=1
Then the customary refresh of initramfs and GRUB. Obvious, but don't forget it.
sudo update-initramfs -u -k all
sudo update-grub
sudo reboot
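After the reboot, one way to confirm that amdgpu (and not radeon) actually claimed the card; the output varies per machine, so treat this as a sketch:

```
# Which kernel driver is bound to the GPU?
lspci -nnk | grep -iA3 vga
# Did amdgpu initialize cleanly?
sudo dmesg | grep -i amdgpu
```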
Even with amdgpu loaded, sway still hits a wall with DPMS, so the configuration below is needed.
# ~/.config/sway/config
# Work around DPMS issues by power-cycling the outputs at startup
# (`dpms` is the legacy spelling of the `power` output command)
exec swaymsg "output * dpms off"
exec swaymsg "output * power on"
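If idle blanking is still wanted later, swayidle can do the power-cycling explicitly. A sketch, assuming swayidle is installed; the 600-second timeout is a placeholder:

```
# ~/.config/sway/config
exec swayidle -w \
    timeout 600 'swaymsg "output * power off"' \
    resume 'swaymsg "output * power on"'
```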
That gets you Wayland with multiple displays while keeping closed firmware close to the minimum.

Notes on hardware selection. In 2025 you can scavenge enterprise e-waste freely; the one exception is the motherboard, for which RCS is the only supplier in the world (an IBM shell company, lol).
CPU: DD2.0 enterprise pulls; a 22-core part runs about $750. Only the RCS heatsink fits (POWER8 surplus coolers don't), at roughly $200; strap two 92 mm fans onto it yourself.
RAM: DDR4 RDIMM. 128 GB of Samsung server pulls goes for about $250; easy pickings on eBay.
Storage controller: the vendor itself advises against the onboard SAS (neither secure nor clean, and it costs extra). A standard LSI solution is fine; just buy a card already flashed to IT mode.
Petitboot is the Talos II's big landmine. Why they didn't just use GRUB from the start is anyone's guess, but there's no point arguing about it now.
Personal rule: never reboot; power off and back on instead. If it hangs somewhere, the serial console or the BMC are your only ways out.
A bricked BMC is no cause for panic: reset it with the jumper as described in the manual. The BMC firmware is rarely truly dead.
If you run this machine, keep an Intel/TDK-pinout 9-pin-to-DB9 adapter on hand; the Supermicro CBL-0010L is known to work.
It's mandatory, because the serial port on the back panel is wired to the host system, not the BMC! When the machine bricks, that port is useless; only the DB9 brought out from the motherboard's 9-pin header reaches the BMC.

Browsers:
Firefox's JS performance on ppc64le is dreadful. If a page really needs JIT, the only options are Chromium or building ungoogled-chromium yourself; no other Chrome-family browser ships a ppc64le build.
It's still plenty for JS-light Discuz forums like S1, though.

File managers: the mainstream ones should all exist on ppc64le. For something light, PCManFM is available; Thunar should be as well.
Passwords: keepassxc and otpclient are both in the ppc64le repos.
Flatpak: doesn't seem easy to use under sway; it doesn't integrate fully with launchers. Still to be investigated.
Editor: Emacs-pgtk; otherwise it falls back to the terminal.
Input method: fcitx5, with pinyin for Chinese and anthy for Japanese; there is no sunpinyin build for ppc64le.

Hmm, I did testing at IBM for a decade or so,
and ppc64 is a pretty annoying platform. I don't do development so I don't really understand the details, but our product shipped with its own bundled JVM, unlike other platforms where we used IBM's JDK directly.
My guess is it was a licensing issue.
However annoying, though, it's nothing next to z. Early on there were pure-z products; later we only kept zLinux compatibility.

子虚乌有 posted on 2025-9-20 17:28:
Hmm, I did testing at IBM for a decade or so,
and ppc64 is a pretty annoying platform. I don't do development so I don't really understand it; our product bundled its own ...
Maybe because ppc64 itself is big-endian? Little-endian mode was added later, which is why I wrote ppc64le.
On big-endian, I expect pretty much nothing would work.

Notes on automatic ZFS key loading.
# Create secure directory
sudo mkdir -p /etc/zfs/keys
sudo chmod 700 /etc/zfs/keys
# Store your key
echo "your-passphrase" | sudo tee /etc/zfs/keys/pool.key
sudo chmod 600 /etc/zfs/keys/pool.key
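For context, a dataset using this key file would have been created along these lines; the cipher and dataset name here are illustrative, not necessarily what the original setup used:

```
# Hypothetical: create a natively encrypted dataset tied to the key file
sudo zfs create \
    -o encryption=aes-256-gcm \
    -o keyformat=passphrase \
    -o keylocation=file:///etc/zfs/keys/pool.key \
    pool/dataset1
```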
/etc/systemd/system/zfs-load-keys.service
[Unit]
Description=Load ZFS encryption keys and mount datasets
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/zfs-load-keys.sh

[Install]
WantedBy=zfs.target
/usr/local/bin/zfs-load-keys.sh
#!/bin/bash
# Auto-load ZFS keys and mount datasets
echo "Loading ZFS encryption keys..."
# Load keys for your datasets
zfs load-key -L file:///etc/zfs/keys/pool.key pool/dataset1
zfs load-key -L file:///etc/zfs/keys/pool.key pool/dataset2
# Mount all datasets
echo "Mounting ZFS datasets..."
zfs mount -a
# Verify mounts
echo "Mounted datasets:"
zfs mount | grep pool
exit 0
sudo chmod +x /usr/local/bin/zfs-load-keys.sh
sudo systemctl daemon-reload
sudo systemctl enable zfs-load-keys.service
sudo systemctl start zfs-load-keys.service

Notes on Restic backups.
/etc/restic/restic-env.sh
#!/bin/bash
# Restic Multi-Dataset Configuration
# S3-compatible storage credentials
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
# Cache directory
export RESTIC_CACHE_DIR="$HOME/.cache/restic"
# Function to set repository based on dataset
set_restic_repo() {
    local dataset=$1
    case $dataset in
        dataset1)
            export RESTIC_REPOSITORY="s3:s3.provider.com/bucket/dataset1"
            export RESTIC_PASSWORD_FILE="/etc/restic/passwords/dataset1.key"
            ;;
        dataset2)
            export RESTIC_REPOSITORY="s3:s3.provider.com/bucket/dataset2"
            export RESTIC_PASSWORD_FILE="/etc/restic/passwords/dataset2.key"
            ;;
        *)
            echo "Unknown dataset: $dataset"
            return 1
            ;;
    esac
    echo "Repository set to: $RESTIC_REPOSITORY"
}
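A usage sketch, assuming the env file above is installed and the S3 bucket already exists; `restic init` is needed once per repository:

```
source /etc/restic/restic-env.sh
set_restic_repo dataset1
restic init       # first run only: creates the repository
restic snapshots  # verify the repository is reachable
```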
/usr/local/bin/restic-backup-zfs.sh
#!/bin/bash
# Restic Backup Script for ZFS Datasets
set -euo pipefail
DATASET_NAME="${1:?Usage: $0 <dataset-name>}"
ZFS_DATASET="pool/${DATASET_NAME}"
# Load environment
source /etc/restic/restic-env.sh
set_restic_repo "$DATASET_NAME" || exit 1
# Create ZFS snapshot
SNAP_TAG="restic-$(date +%Y%m%d-%H%M%S)"
SNAPSHOT_NAME="${ZFS_DATASET}@${SNAP_TAG}"
zfs snapshot "$SNAPSHOT_NAME"
# Get snapshot path (only the part after '@' appears under .zfs/snapshot/)
DATASET_MOUNT=$(zfs get -H -o value mountpoint "$ZFS_DATASET")
SNAPSHOT_PATH="${DATASET_MOUNT}/.zfs/snapshot/${SNAP_TAG}"
# Perform backup (|| is needed so set -e does not abort before cleanup)
BACKUP_STATUS=0
restic backup "$SNAPSHOT_PATH" \
    --tag "zfs" \
    --tag "$DATASET_NAME" \
    --exclude-caches \
    --one-file-system || BACKUP_STATUS=$?
# Cleanup
if [ "$BACKUP_STATUS" -eq 0 ]; then
    zfs destroy "$SNAPSHOT_NAME"
else
    echo "Backup failed. Keeping snapshot: $SNAPSHOT_NAME"
fi
exit $BACKUP_STATUS
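The backup script only ever adds snapshots; to keep the repository from growing forever, a retention pass can be run periodically. A sketch with placeholder keep counts:

```
# Drop old snapshots per policy and reclaim space
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune
```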
# Quick structural check
restic check
# Check 5% of data randomly
restic check --read-data-subset=5%
# Full data verification (use sparingly)
restic check --read-data

ZFS dataset setup.
# Create the dataset
sudo zfs create pool/musics
# Set recordsize to 128K (good for media files)
sudo zfs set recordsize=128k pool/musics
# Disable access time updates (reduces unnecessary writes)
sudo zfs set atime=off pool/musics
# Set compression (though music files are usually already compressed)
sudo zfs set compression=lz4 pool/musics
sudo zfs create pool/documents
sudo zfs set recordsize=64k pool/documents
sudo zfs set compression=zstd pool/documents
sudo zfs set atime=off pool/documents
sudo zfs create pool/archives
sudo zfs set recordsize=1M pool/archives
sudo zfs set compression=off pool/archives  # Already compressed
sudo zfs set atime=off pool/archives
sudo zfs create pool/videos
sudo zfs set recordsize=1M pool/videos
sudo zfs set compression=off pool/videos  # Video is already compressed
sudo zfs set atime=off pool/videos
sudo zfs set primarycache=metadata pool/videos  # Save ARC for other data
sudo zfs create pool/pictures
sudo zfs set recordsize=128k pool/pictures
sudo zfs set compression=lz4 pool/pictures  # Minimal overhead, helps with metadata
sudo zfs set atime=off pool/pictures
sudo zfs create pool/ebooks
sudo zfs set recordsize=64k pool/ebooks
sudo zfs set compression=zstd pool/ebooks  # Great compression for text
sudo zfs set atime=off pool/ebooks
sudo zfs create pool/downloads
sudo zfs set recordsize=1M pool/downloads
sudo zfs set compression=lz4 pool/downloads  # ISOs may compress, ZIMs won't
sudo zfs set atime=off pool/downloads
sudo zfs create pool/photos
sudo zfs set recordsize=1M pool/photos  # RAW files are large
sudo zfs set compression=zstd pool/photos  # RAW files compress well
sudo zfs set atime=off pool/photos
sudo zfs set copies=2 pool/photos  # Extra redundancy for irreplaceable photos
sudo zfs create pool/dump
sudo zfs set recordsize=128k pool/dump
sudo zfs set compression=lz4 pool/dump
sudo zfs set atime=off pool/dump
sudo zfs set dedup=on pool/dump  # Needs lots of RAM!
# Set ownership if needed
sudo chown -R username:username /pool/musics
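A quick way to confirm the per-dataset properties actually took effect (shown here for the photos dataset):

```
zfs get recordsize,compression,atime,copies pool/photos
```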