Hi:

Create a disk partition of the required size (200 GiB here):

    system host-disk-partition-add controller-0 /dev/sda 200

Check the newly created partition and add it to the cgts-vg volume group:

    system host-disk-partition-list controller-0
    system host-pv-add controller-0 cgts-vg 3df6abcc-df58-46f3-b99f-02ff7504032f   (the new partition's uuid)

From: Sun, Austin <austin.sun@intel.com>
Sent: March 10, 2021 15:02
To: chen.dq@neusoft.com; Chen, DongqiX <dongqix.chen@intel.com>
Subject: FW: [Starlingx-discuss] HostFs update failed: Not enough free space

From: Embedded Devel <lists@optimcloud.com>
Sent: Wednesday, March 10, 2021 12:06 PM
To: Sun, Austin <austin.sun@intel.com>; starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] HostFs update failed: Not enough free space

The hardware guide states:

Primary disk: 500 GB SSD or NVMe (see Configure NVMe Drive as Primary Disk, https://docs.starlingx.io/deploy_install_guides/nvme_config.html)

Additional disks:
* 1 or more 500 GB (min. 10K RPM) for Ceph OSD
* Recommended, but not required: 1 or more SSDs or NVMe drives for Ceph journals (min. 1024 MiB per OSD journal)
* For OpenStack, recommended: 1 or more 500 GB (min. 10K RPM) for VM local ephemeral storage

Both nodes have three 1 TB drives (1 TB primary, two 1 TB OSDs), so I am curious why I am even seeing the error:

[ 8.241106] sd 0:2:0:0: [sda] 1952448512 512-byte logical blocks: (1000 GB/931 GiB)
[ 8.241427] sd 0:2:1:0: [sdb] 1952448512 512-byte logical blocks: (1000 GB/931 GiB)
[ 8.249716] sd 0:2:2:0: [sdc] 1952448512 512-byte logical blocks: (1000 GB/931 GiB)

node=controller-0;system host-pv-show $node $(system host-pv-list $node | grep cgts-vg | awk -F'|' '{print $2}')
+--------------------------+-------------------------------------------------------+
| Property                 | Value                                                 |
+--------------------------+-------------------------------------------------------+
| uuid                     | 8ce1afab-7eb6-460e-8f58-6d17fd904e1e                  |
| pv_state                 | provisioned                                           |
| pv_type                  | partition                                             |
| disk_or_part_uuid        | 99e5a6ef-ec84-4922-b053-d0ac19cf2b5b                  |
| disk_or_part_device_node | /dev/sda5                                             |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0-part5 |
| lvm_pv_name              | /dev/sda5                                             |
| lvm_vg_name              | cgts-vg                                               |
| lvm_pv_uuid              | KZ3cFY-rNnu-t7cF-wguw-Z0ey-9uYQ-ufZLcj                |
| lvm_pv_size_gib          | 163.968                                               |
| lvm_pe_total             | 5247                                                  |
| lvm_pe_alloced           | 5210                                                  |
| ihost_uuid               | 353c9099-43a6-47ce-a92a-56d382abde2f                  |
| created_at               | 2021-03-06T13:09:20.082017+00:00                      |
| updated_at               | 2021-03-10T03:46:53.573286+00:00                      |
+--------------------------+-------------------------------------------------------+

[sysadmin@controller-1 ~(keystone_admin)]$ node=controller-1;system host-pv-show $node $(system host-pv-list $node | grep cgts-vg | awk -F'|' '{print $2}')
+--------------------------+-------------------------------------------------------+
| Property                 | Value                                                 |
+--------------------------+-------------------------------------------------------+
| uuid                     | 13071ae3-1636-41df-b114-b3ccb4c97fd5                  |
| pv_state                 | provisioned                                           |
| pv_type                  | partition                                             |
| disk_or_part_uuid        | 120eb244-4c5c-4fcc-beba-9de368067ba6                  |
| disk_or_part_device_node | /dev/sda5                                             |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:0:0-part5 |
| lvm_pv_name              | /dev/sda5                                             |
| lvm_vg_name              | cgts-vg                                               |
| lvm_pv_uuid              | 776vC0-7yEs-kHqB-73gv-mrOQ-q1j8-hjEcmR                |
| lvm_pv_size_gib          | 163.968                                               |
| lvm_pe_total             | 5247                                                  |
| lvm_pe_alloced           | 5210                                                  |
| ihost_uuid               | 37fe69a1-80d8-436b-9765-93c901cb4c12                  |
| created_at               | 2021-03-09T05:38:54.185540+00:00                      |
| updated_at               | 2021-03-10T03:46:41.316696+00:00                      |
+--------------------------+-------------------------------------------------------+
[sysadmin@controller-1 ~(keystone_admin)]$

On 3/9/21 7:35 PM, Sun, Austin wrote:

node=controller-0;system host-pv-show $node $(system host-pv-list $node | grep cgts-vg | awk -F'|' '{print $2}')
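For reference, the end-to-end sequence to grow cgts-vg and then retry the failing host-fs resize would look roughly like the sketch below. This is only an outline of the approach suggested at the top of the thread; the partition uuid, the filesystem name and the sizes are placeholders, not values taken from this thread.

    # Assumed sketch - replace <partition-uuid>, <fs-name> and sizes with real values.
    system host-disk-list controller-0                        # check available space on /dev/sda
    system host-disk-partition-add controller-0 /dev/sda 200  # create a 200 GiB partition
    system host-disk-partition-list controller-0              # wait until the new partition is Ready, note its uuid
    system host-pv-add controller-0 cgts-vg <partition-uuid>  # add it to cgts-vg as a physical volume
    system host-lvg-list controller-0                         # cgts-vg should now show the extra capacity
    system host-fs-list controller-0                          # current host filesystem sizes
    system host-fs-modify controller-0 <fs-name>=<size-gib>   # retry the resize that failed with "Not enough free space"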