[Starlingx-discuss] Stories for stx-tools deployment

Michel Thebeau michel.thebeau at windriver.com
Tue Oct 2 19:08:04 UTC 2018


On 2018-10-02 01:31 PM, Arce Moreno, Abraham wrote:
> Thanks Michel,


Hi Abraham


> I am using your email as a base to better understand, from your experience, the details
> needed to work the stories below.
>
>> I created four stories:
>>
>> deployment/libvirt: sizing for reference configurations
>>     https://storyboard.openstack.org/#!/story/2003835
> This is our current libvirt configuration [0]:
>
> [ All-In-One ]
>
> - setup_allinone.sh
> - Controllers: 2
>    - vCPUs: 6
>    - Memory: 18 GB
>    - Disks:
>      - 0: 600 GB
>      - 1: 200 GB
>      - 2: 200 GB
>    - NICs: 4
>
> [ Standard Controller ]
>
> - setup_standard_controller.sh
> - Controllers: 2
>    - vCPUs: 4
>    - Memory: ~17 GB
>    - Disks:
>      - 0: 200 GB
>      - 1: 200 GB
>    - NICs: 4
> - Computes: 2
>    - vCPUs: 4
>    - Memory: ~17 GB
>    - Disks:
>      - 0: 200 GB
>      - 1: 200 GB
>    - NICs: 4
>
> Some questions:
>
> - Do we need any changes to the above configurations?


There was a recent "stx.2018.10 test plan" email on the 
starlingx-discuss list, which I think would be useful for making this 
decision: do we need any changes to the controller-storage, duplex or 
simplex configurations to support tests under virtualization?  I wrote 
in the notes for this story "Prepare a list of reference test cases..." 
and then soon after (the same day) that test plan email came out - so, 
free alignment, "yay!".

I want to shrink the memory footprint for the October release.  At the 
very least, I think we could fit simplex under 16G, duplex under 32G 
and controller-storage under 64G.  For example, experience from 
previous releases put controller-storage at 24G total for the four VMs.
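
As a rough illustration only (the domain names here are assumptions, 
not necessarily what the stx-tools scripts create), already-defined 
libvirt guests can be resized with virsh before they are booted, for 
example to fit a duplex setup under 32G total:

  # Hypothetical sketch: shrink both controllers of a duplex setup to 16G each.
  # Adjust the domain names to whatever your setup_allinone.sh run defined.
  for vm in controller-0 controller-1; do
      virsh setmem    "${vm}" 16G --config   # allocation recorded in the saved XML
      virsh setmaxmem "${vm}" 16G --config   # new ceiling for the domain
  done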


> - We are covered for Simplex, Duplex and Controller Storage configurations
>    but Dedicated Storage is not identified as such, do we need to:
>    1. Create a specific setup_dedicated_controller.sh?
>    2. or should we reuse existing setup_standard_controller.sh passing a parameter
>         to identify between Controller or Dedicated Storage?


I do not have a strong opinion.  For example, you currently run 
setup_allinone.sh for both duplex and simplex; one simply ignores the 
second controller when running simplex.  If you added storage XML to 
setup_dedicated_controller.sh I expect you'd be fine - one could ignore 
the storage VMs when they are not needed.  (The script name would not 
be quite right, though.)

In practice, for previous releases, I would give the controller VM the 
CPU, memory, disks and networks needed to support either configuration.
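
Purely as an illustration of the "add storage XML" option (the node 
names and edit steps here are assumptions, not the stx-tools layout), 
one could start a storage node definition from an existing node's XML 
rather than writing it from scratch:

  # Hypothetical sketch: clone an existing guest definition as the basis
  # for a dedicated storage node.
  virsh dumpxml compute-0 > storage-0.xml
  # Edit storage-0.xml: change <name>, drop <uuid>, give the interfaces
  # new MAC addresses and point the disks at new volumes.  Then:
  virsh define storage-0.xml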


> - Once we have these optimizations do we need to port them to our VirtualBox
>    configurations?


(I have not observed anyone supporting VirtualBox, but...)  Yes, I 
would expect the methods to be aligned.


> - In the story description, we mentioned:
>    "Optimize cpu, memory, hardware emulation, etc for test coverage including..."
>    Any more thoughts from your side on what the test coverage means? Any specific
>    Things you want us to consider from a testing perspective? Does it involves things
>    like tempest?


I would refer to the recent "stx.2018.10 test plan" email thread. 
Someone somewhere has started to think about which tests can run on 
hardware and which under virtualization.  I am thinking about what needs 
to be there for those tests.


>
>> deployment/libvirt: virtual disk placement for reference configurations
>>     https://storyboard.openstack.org/#!/story/2003836
> Can you please let us understand the use case for this story?


The host's root device can be overloaded with the host and six VMs all 
running from it; one needs to be careful that the host's root disk is 
not saturated when running automation, for example.  But I'm really 
looking forward to running multiple clusters on one host using the 
stx-tools implementation (which is another good reason to bring the 
memory footprint down), and that requires more disks.  It's the same 
story either way: configure where the virtual disks are stored.
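
As a sketch of what I mean (the pool name and path are assumptions), a 
dedicated libvirt storage pool on a second disk keeps the guest images 
off the host's root device:

  # Assumed layout: a spare disk mounted at /srv/stx-pool on the host.
  virsh pool-define-as stx-pool dir --target /srv/stx-pool
  virsh pool-build stx-pool
  virsh pool-start stx-pool
  virsh pool-autostart stx-pool

  # Create the guest disks in that pool instead of the default location.
  virsh vol-create-as stx-pool controller-0-disk0.img 600G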



>   
>> deployment/libvirt: tenant networking for reference configurations
>>     https://storyboard.openstack.org/#!/story/2003837
> What do you think about this [1] reference configuration for tenant networking?


The series of commands looks eerily familiar, almost as if it were 
copy/pasted from our internal wiki (lol).  VLANs 10 and 500 are outside 
the 100-400 range listed on the wikis, and providernet-b has no 
previous definition.
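
(For reference, the wiki keeps the tenant VLANs inside a single range 
on providernet-a; the same commands appear, commented out, at the top 
of the attached script:

  neutron providernet-create providernet-a --type=vlan
  neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a

so the segmentation IDs used for tenant networks need to fall within 
100-400.)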

I missed that script when I was testing the libvirt deployment method; 
otherwise I would have updated it to be an example that matches the 
wiki, rather than carrying the attached script.

M



> [0] https://git.openstack.org/cgit/openstack/stx-tools/tree/deployment/libvirt
> [1] https://git.openstack.org/cgit/openstack/stx-tools/tree/deployment/provision/simplex_stage_2.sh
>

-------------- next part --------------
# Provider network and compute-0 data interface/vswitch configuration.
# The create/modify commands are left commented out here, but PROVIDERNET
# is referenced below, so it must be set.
PROVIDERNET=providernet-a
#neutron providernet-create ${PROVIDERNET} --type=vlan
#neutron providernet-range-create --name ${PROVIDERNET}-range1 --range 100-400 ${PROVIDERNET}

#system host-if-modify -p ${PROVIDERNET} -nt data compute-0 eth1000
#system host-cpu-modify compute-0 -f vswitch -p0 1

# Look up the admin project ID (the tenant networks below are created
# under admin) and sanity-check that the provider network exists.
ADMINID=`openstack project list | grep admin | awk '{print $2}'`
neutron providernet-list
PHYSNET0="${PROVIDERNET}"

# Names for the tenant networks, subnets and routers.
PUBLICNET='public-net0'
PRIVATENET='private-net0'
INTERNALNET='internal-net0'
EXTERNALNET='external-net0'
PUBLICSUBNET='public-subnet0'
PRIVATESUBNET='private-subnet0'
INTERNALSUBNET='internal-subnet0'
EXTERNALSUBNET='external-subnet0'
PUBLICROUTER='public-router0'
PRIVATEROUTER='private-router0'


# Create the tenant networks: external, public and private as VLANs on
# the provider network, plus an internal network with no provider mapping.
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=100 --router:external ${EXTERNALNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=400 ${PUBLICNET}
neutron net-create --tenant-id ${ADMINID} --provider:network_type=vlan --provider:physical_network=${PHYSNET0} --provider:segmentation_id=150 ${PRIVATENET}
neutron net-create --tenant-id ${ADMINID} ${INTERNALNET}

# Capture the network UUIDs (the external network ID is used for the
# router gateways below).
PUBLICNETID=`neutron net-list | grep ${PUBLICNET} | awk '{print $2}'`
PRIVATENETID=`neutron net-list | grep ${PRIVATENET} | awk '{print $2}'`
INTERNALNETID=`neutron net-list | grep ${INTERNALNET} | awk '{print $2}'`
EXTERNALNETID=`neutron net-list | grep ${EXTERNALNET} | awk '{print $2}'`

# Create a subnet on each network; the external subnet carries the
# gateway and has DHCP disabled.
neutron subnet-create --tenant-id ${ADMINID} --name ${PUBLICSUBNET} ${PUBLICNET} 192.168.101.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${PRIVATESUBNET} ${PRIVATENET} 192.168.201.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${INTERNALSUBNET} --no-gateway  ${INTERNALNET} 10.10.0.0/24
neutron subnet-create --tenant-id ${ADMINID} --name ${EXTERNALSUBNET} --gateway 192.168.1.1 --disable-dhcp ${EXTERNALNET} 192.168.1.0/24

# Create the tenant routers and capture their UUIDs.
neutron router-create ${PUBLICROUTER}
neutron router-create ${PRIVATEROUTER}
PRIVATEROUTERID=`neutron router-list | grep ${PRIVATEROUTER} | awk '{print $2}'`
PUBLICROUTERID=`neutron router-list | grep ${PUBLICROUTER} | awk '{print $2}'`

# Attach both routers to the external network (with SNAT disabled) and
# add the tenant subnets to their respective routers.
neutron router-gateway-set --disable-snat ${PUBLICROUTERID} ${EXTERNALNETID}
neutron router-gateway-set --disable-snat ${PRIVATEROUTERID} ${EXTERNALNETID}
neutron router-interface-add ${PUBLICROUTER} ${PUBLICSUBNET}
neutron router-interface-add ${PRIVATEROUTER} ${PRIVATESUBNET}

###
# Matching data interface/vswitch configuration for compute-1 (commented
# out, as for compute-0 above).
#PROVIDERNET=providernet-a
#system host-if-modify -p ${PROVIDERNET} -nt data compute-1 eth1000
#system host-cpu-modify compute-1 -f vswitch -p0 1


