[Starlingx-discuss] StarlingX feedback

Jones, Bruce E bruce.e.jones at intel.com
Fri Mar 20 15:52:33 UTC 2020


Ildiko, thank you for sharing this with the community.  Feedback is a gift, and this kind of feedback is especially helpful.  It tells us what we need to work on to improve both the software and its documentation.  If you can, please thank this user on behalf of our community.

        bruej

-----Original Message-----
From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] 
Sent: Friday, March 20, 2020 4:13 AM
To: starlingx <starlingx-discuss at lists.starlingx.io>
Subject: [Starlingx-discuss] StarlingX feedback

Hi StarlingX Community,

As you may already know, we have a user survey continuously open to gauge interest and receive feedback from people who evaluate and deploy StarlingX.

I recently received more detailed feedback through the survey that I would like to share with you below.


The person who provided the feedback has good experience with the Wind River Titanium platform, which is why he decided to look into StarlingX and try it out. He marked MEC as his primary use case in the survey.

With limited time and resources he had a small test environment to play with: the two controller nodes running as VMs on a single physical host, plus 4 physical compute nodes, as follows:

* Controllers (2 VMs on a single host):
  * 1x HPE DL360 Gen8 (1x 12-core CPU, 32GB RAM)
* Computes:
  * 2x HPE DL360 Gen9 (2x 10-core CPU, 128GB RAM)
  * 2x HPE DL380 Gen9 (2x 18-core CPU, 192GB RAM)

StarlingX versions tested:
* 2.0
* 3.0

Installation experience:

As it is very similar to the Titanium platform, the installation of controllers and computes went pretty smoothly and there weren't too many issues.
Most of the challenges came from controlling the containers in the new environment, which was expected. Because the configuration steps were not documented at the time, the required steps had to be worked out by trial and error until the right ones were found.

Challenges:

The way OpenStack networking is propagated over the data networks turned out to be very problematic and led to hitting a wall a few times. In detail: you have to configure the data networks through the platform management components, but then you also have to declare the VLANs you are going to use over those data networks on the OpenStack side. This was not documented anywhere.
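
For anyone hitting the same wall: the data network itself is created and bound to host interfaces on the platform side (the installation guides use the "system datanetwork-add" and "system interface-datanetwork-assign" commands for this), while each VLAN carried on it still has to be declared as a provider network inside the containerized OpenStack. Below is a minimal sketch of that OpenStack half using openstacksdk; the data network name "physnet0", VLAN ID 100 and the subnet range are illustrative placeholders, not values from his setup.

#!/usr/bin/env python3
# Sketch: declare a VLAN provider network that maps onto a platform data network.
# "physnet0", VLAN 100 and the CIDR below are placeholders.
import openstack

# Credentials come from clouds.yaml or the usual OS_* environment variables.
conn = openstack.connect()

# Provider network bound to the data network named "physnet0" on the platform side.
net = conn.network.create_network(
    name="tenant-vlan100",
    provider_network_type="vlan",
    provider_physical_network="physnet0",  # must match the data network name
    provider_segmentation_id=100,          # VLAN tag carried on the data interface
)

# Subnet so Neutron can allocate addresses for VM ports on this network.
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="tenant-vlan100-subnet",
    ip_version=4,
    cidr="192.168.100.0/24",
)

print("created network %s with subnet %s" % (net.id, subnet.id))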

Once the data network was set up, it was also a challenge to get VMs running and connected to networks, since the computes would become unstable.
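
Assuming a provider network like the one sketched above exists, a quick way to check whether the computes can actually schedule and wire up a VM is to boot a small test instance against it and see if it reaches ACTIVE. A rough sketch, again with openstacksdk, where the image and flavor names are placeholders:

import openstack

conn = openstack.connect()

# Placeholders: substitute an image and flavor that exist in the deployment.
image = conn.compute.find_image("cirros")
flavor = conn.compute.find_flavor("m1.tiny")
network = conn.network.find_network("tenant-vlan100")

server = conn.compute.create_server(
    name="datanet-smoke-test",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the instance is ACTIVE, or raise if Nova/Neutron error out.
server = conn.compute.wait_for_server(server)
print(server.status)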

In summary, he is not a fan of the decoupling of the platform parts from the OpenStack part.

Issues with stability:
- controllers going out of sync (possibly due to running them as VMs on the same host, which was somewhat limited in resources)
- computes losing the OpenStack Helm packages after a reboot
- computes had random Neutron or Nova service crashes when deploying VMs

Overall summary:

At this point StarlingX doesn't feel stable enough for critical loads, but there is a lot of potential in the platform.
It is good to be able to run with 2 controller nodes instead of the 4 that some other OpenStack deployments demand. The architecture choices made under the hood are also quite nice and simple, which he liked.
A big plus is not having the hassle of writing a bunch of deployment templates.
Platform monitoring is great, and patching also looks promising, though he never got the time to test it.


The person who provided the feedback currently has very limited time to look into StarlingX but hopes to be able to get back to it in the future. If you have follow-up questions, please post them on this mail thread, or, if there is demand, I will check whether he would be available to join one of the community calls to follow up on the above.

Thanks and Best Regards,
Ildikó



_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
