On 2018-10-02 01:31 PM, Arce Moreno, Abraham wrote:
Thanks Michel,
Hi Abraham
I am taking your email as a base to understand from your experience more on the details to work the stories below.
I created four stories:
deployment/libvirt: sizing for reference configurations https://storyboard.openstack.org/#!/story/2003835
This is our current libvirt configuration [0]:
[ All-In-One ]
- setup_allinone.sh
  - Controllers: 2
    - vCPUs: 6
    - Memory: 18 GB
    - Disks:
      - 0: 600 GB
      - 1: 200 GB
      - 2: 200 GB
    - NICs: 4
[ Standard Controller ]
- setup_standard_controller.sh
  - Controllers: 2
    - vCPUs: 4
    - Memory: ~17 GB
    - Disks:
      - 0: 200 GB
      - 1: 200 GB
    - NICs: 4
  - Computes: 2
    - vCPUs: 4
    - Memory: ~17 GB
    - Disks:
      - 0: 200 GB
      - 1: 200 GB
    - NICs: 4
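As a point of reference, the All-In-One controller sizing above can be expressed as a single virt-install command. This is only a sketch: the actual scripts work from libvirt XML templates, and the domain name, ISO, bridge names and graphics settings below are assumptions, not the stx-tools defaults.

  # Illustrative only: approximate the All-In-One controller sizing with virt-install.
  # virt-install takes memory in MiB, so 18 GB is 18432.
  virt-install --name controller-0 \
      --vcpus 6 --memory 18432 \
      --disk size=600 --disk size=200 --disk size=200 \
      --network bridge=virbr1 --network bridge=virbr2 \
      --network bridge=virbr3 --network bridge=virbr4 \
      --cdrom bootimage.iso --os-variant centos7.0 --graphics vnc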
Some questions:
- Do we need any changes to the above configurations?
There was a recent "stx.2018.10 test plan" email on the starlingx-discuss list, which I think would be useful for making this decision: do we need any changes to the controller-storage, duplex or simplex configurations to support the tests under virtualization? I wrote in the notes for this story "Prepare a list of reference test cases..." and that test plan email came out soon after (the same day) - free alignment, yay! I do want to shrink the memory footprint for the October release. At the very least I think we could fit simplex under 16 GB, duplex under 32 GB and controller-storage under 64 GB; for example, in previous releases controller-storage fit in 24 GB total for the four VMs.
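To illustrate the kind of trimming I mean, the memory of an already-defined (shut off) domain can be lowered with virsh; the domain name here is a placeholder, not necessarily what the setup scripts define:

  # Sketch only: shrink a defined, shut-off domain from 18G to 16G.
  # Lower the current memory first, then the maximum, so the values stay consistent.
  virsh setmem controller-0 16G --config
  virsh setmaxmem controller-0 16G --config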
- We are covered for the Simplex, Duplex and Controller Storage configurations, but Dedicated Storage is not identified as such. Do we need to:
  1. create a specific setup_dedicated_controller.sh, or
  2. reuse the existing setup_standard_controller.sh, passing a parameter to choose between Controller Storage and Dedicated Storage?
I do not have a strong opinion. For example, currently you run setup_allinone.sh for both duplex and simplex, and simply ignore the second controller when running simplex. If you added storage XML to setup_dedicated_controller.sh I expect you'd be fine - one could ignore the storage VMs when they are not needed (although the script name would not be right). In practice, for previous releases I would give the controller VMs the CPU, memory, disks and networks needed to support either configuration.
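If option 2 is chosen, the parameter handling could be as simple as the following sketch; the flag name and the storage_*.xml templates are hypothetical, just to show the shape of it:

  # Hypothetical sketch of option 2: one script, plus an optional flag that also
  # defines the storage VMs. The flag and the storage_*.xml names are made up.
  DEDICATED_STORAGE="no"
  [ "$1" = "--dedicated-storage" ] && DEDICATED_STORAGE="yes"

  # ... define the controller and compute domains as the script does today ...

  if [ "$DEDICATED_STORAGE" = "yes" ]; then
      for i in 0 1; do
          virsh define "storage_${i}.xml"
      done
  fi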
- Once we have these optimizations, do we need to port them to our VirtualBox configurations?
(I have not observed anyone supporting VirtualBox, but...) Yes, I would expect the methods to be aligned.
- In the story description, we mentioned: "Optimize cpu, memory, hardware emulation, etc for test coverage including..." Any more thoughts from your side on what the test coverage means? Any specific things you want us to consider from a testing perspective? Does it involve things like Tempest?
I would refer to the recent "stx.2018.10 test plan" email thread. Someone somewhere has started to think about which tests can run on hardware and which under virtualization. I am thinking about what needs to be there for those tests.
deployment/libvirt: virtual disk placement for reference configurations https://storyboard.openstack.org/#!/story/2003836
Can you please help us understand the use case for this story?
The host's root disk can be overloaded when the host and six VMs are all running from it, so one needs to be careful about that when running automation, for example. I am also really looking forward to running multiple clusters on one host using the stx-tools implementation (which is another good reason to bring the memory footprint down), and that requires more disks. It's the same story either way: configure where you want the virtual disks to be stored.
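In practice that could be as small a change as making the backing-file location configurable; a sketch, where the variable and file names are assumptions rather than the current script contents:

  # Sketch: let the caller choose where the qcow2 backing files live instead of
  # always writing them to the host's root disk.
  DISK_DIR=${DISK_DIR:-/var/lib/libvirt/images}
  mkdir -p "$DISK_DIR"
  qemu-img create -f qcow2 "$DISK_DIR/controller-0-disk-0.img" 600G
  qemu-img create -f qcow2 "$DISK_DIR/controller-0-disk-1.img" 200G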
deployment/libvirt: tenant networking for reference configurations https://storyboard.openstack.org/#!/story/2003837
What do you think about this [1] reference configuration for tenant networking?
The series of commands appears eerily familiar, almost as if it was copied/pasted from our internal wiki (lol). VLANs 10 and 500 are outside of the 100-400 range listed on the wikis, and providernet-b has no previous definition. I missed that script when I was testing the libvirt deployment method, otherwise I would have updated it to be an example that matches the wiki, rather than carrying the attached script; a rough wiki-aligned sketch follows below.

M
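For comparison, a wiki-aligned provisioning keeps the tenant VLANs inside the provider network's 100-400 range, roughly like this; the providernet commands are written from memory of the 2018-era wiki and the names, segment and subnet are illustrative:

  # Provider network and range, as on the wiki (syntax from memory, may differ).
  neutron providernet-create providernet-a --type=vlan
  neutron providernet-range-create --name providernet-a-range1 \
      --range 100-400 providernet-a

  # A tenant network on a segment inside that range (standard OpenStack CLI).
  openstack network create --provider-network-type vlan \
      --provider-physical-network providernet-a --provider-segment 100 tenant1-net
  openstack subnet create --network tenant1-net \
      --subnet-range 192.168.101.0/24 tenant1-subnet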
[0] https://git.openstack.org/cgit/openstack/stx-tools/tree/deployment/libvirt
[1] https://git.openstack.org/cgit/openstack/stx-tools/tree/deployment/provision/...