Sorry for the late reply.

On Wed, Apr 3, 2019 at 7:21 AM Curtis <serverascode@gmail.com> wrote:
On Mon, Apr 1, 2019 at 7:43 PM Arce Moreno, Abraham <abraham.arce.moreno@intel.com> wrote:
I added some points/questions inline.
Thanks Curtis for your time!
We are integrating this demo in our spare time to ramp up on cloud technologies, and one of its imperatives is a working solution. It started as a use case proposal around unmanned aerial systems [0]; we then decided to avoid some of the complexity involved in flying the drones, and finally landed on a use case around home automation / smart cities at the network edge.
First off, I'd like to let people know that we are planning on doing some kind of "edge" proof-of-concept with Packet.com resources, so perhaps the project you discuss could fit in with that. I'm sure we'll chat about it at some point here.
At the next TSC meeting we'll discuss how to get the packet projects off the ground, so feel free to attend. :)
Awesome! We will be paying attention to community communications about this topic.
This demo has currently integrated the following acceleration resources:
- GPU
- VPU (Movidius NCS)
I would not expect a USB device like the Movidius NCS to be available in most STX deployments, but maybe?
Maybe. The Movidius NCS seems to be one of those exploration paths to offload some workloads, and one where budget could make a difference in comparison with FPGAs.
Oh for sure, cost effective. I see what you mean.
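As a rough illustration of that kind of offload, below is a minimal Python sketch using the OpenVINO Inference Engine to run a model on a Movidius VPU when one is present; the model file names are placeholders and the exact API names vary slightly between OpenVINO releases, so treat it as a sketch rather than the demo's actual code.

# Minimal sketch: target a Movidius VPU ("MYRIAD") if available, else fall back to CPU.
# The IR files (face-detection.xml/.bin) are hypothetical placeholders.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
device = "MYRIAD" if "MYRIAD" in ie.available_devices else "CPU"
exec_net = ie.load_network(network=net, device_name=device)
# exec_net.infer({input_blob: frame}) would then run inference on the selected device.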
[ StarlingX Deployment ] [ Offload ]

What would be the preferred way to deploy this use case proposal in StarlingX? We understand the following options are available, along with their preference:

1. Via Kubernetes (Not Preferred)
2. Via Virtual Machine (Preferred)
3. Via Bare Metal (Preferred)
We should support any of them, in my opinion; your project is a great example of a case where a potential user in the future might not want to run their container apps in a VM. I can take the AR to test the boundaries of what the kernel-space needs are for the latest kernel that other distributions provide. Maybe having an alternative latest LTS kernel for CentOS might not be that crazy after this kind of approach.

Regards
Are the above options and their preference correct? If not, can you please give us some hints behind your answer?
From my standpoint, I think #3 would be the least common option. #2 would be a good place to start, but I don't think #1 is "not preferred"; I guess it depends on where these preferences are coming from.
Understood. We think it is worth trying option 2 initially, at least for the core applications of the use case.
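For reference, option 1 would look roughly like the sketch below: the containerized service is scheduled straight onto the platform's Kubernetes, with no VM in between, here using the kubernetes Python client. The image, service name, and resource limits are hypothetical placeholders, not the demo's real values.

# Minimal sketch of "via Kubernetes": create a Deployment for one analytics service.
# Assumes a kubeconfig with access to the cluster and the `kubernetes` Python package.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="video-analytics",                       # hypothetical service name
    image="example.org/video-analytics:latest",   # hypothetical image
    resources=client.V1ResourceRequirements(limits={"cpu": "2", "memory": "2Gi"}),
)
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "video-analytics"}),
    spec=client.V1PodSpec(containers=[container]),
)
spec = client.V1DeploymentSpec(
    replicas=1,
    selector=client.V1LabelSelector(match_labels={"app": "video-analytics"}),
    template=template,
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="video-analytics"),
    spec=spec,
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)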
[ StarlingX Deployment ] [ Provisioning ]
As mentioned at the beginning, another of our imperatives is to exercise zero touch provisioning.
Does it make sense to split the provisioning into two parts, based on how long the demo components are required to live?
- The core applications: 100% uptime
- Services: on demand / 100% uptime in some cases
By zero touch provisioning do you just mean automation using IaaS APIs, e.g. the docker compose file you link to? Or something else?
We understand the term from its definition, but that "something else" is not within our knowledge yet. From our current understanding, zero touch provisioning would allow us to deploy with a single instruction:
- The core applications part of the use case (e.g. access to the different dashboards)
- The services part of the use case: the start and stop of service X (e.g. face recognition, object recognition, etc.) for each of the wanted video streams (see the sketch below)
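For the services part, a minimal sketch of that per-stream start/stop idea, using the Docker SDK for Python, is shown below; the image names, stream URLs, and the STREAM_URL environment variable are hypothetical placeholders.

# Rough sketch: start or stop one analytics service container per video stream.
import docker

docker_client = docker.from_env()

def start_service(service, camera, stream_url):
    # e.g. start_service("face-recognition", "camera-1", "rtsp://camera-1/stream")
    return docker_client.containers.run(
        "example.org/{}:latest".format(service),   # hypothetical image
        environment={"STREAM_URL": stream_url},
        name="{}-{}".format(service, camera),
        detach=True,
    )

def stop_service(service, camera):
    docker_client.containers.get("{}-{}".format(service, camera)).stop()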
We would appreciate it if you could share any online resources where we can learn more about this zero touch concept in a practical way (e.g. a whitepaper or use case) so we can apply it to our use case.
I'm interested in "zero touch" and I'll be doing some research over the next while. This is also potentially something that can benefit stx. This is just me talking, but I think there is a difference between zero touch and automation. To me the canonical example of ZT would be turning on a device, typically physical, and that device starts up, registers, and then is scheduled and takes on some kind of personality for whatever workload is scheduled to it, all without any human intervention.
Manually initiating an automation workflow, like say a docker compose run, doesn't feel like ZT to me, but again I'm still working to define it for myself. :)
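To make that distinction concrete, here is a rough Python sketch of the device side of the flow described above: the device boots, registers itself, and waits to be assigned a personality, with no human in the loop. The provisioning endpoint, payload, and assignment format are entirely hypothetical.

# Minimal sketch of a zero-touch device flow: announce, then poll for an assignment.
import time
import uuid
import requests

REGISTRY = "https://provisioning.example.org"   # hypothetical ZTP service
DEVICE_ID = str(uuid.uuid4())                   # stand-in for a hardware identity

# 1. Announce this device and its capabilities to the provisioning service.
requests.post(REGISTRY + "/devices", json={"id": DEVICE_ID, "capabilities": ["gpu", "vpu"]})

# 2. Wait until the scheduler assigns this device a role/workload.
while True:
    resp = requests.get(REGISTRY + "/devices/" + DEVICE_ID + "/assignment")
    if resp.status_code == 200:
        assignment = resp.json()
        break
    time.sleep(10)

# 3. Take on the assigned personality, e.g. pull and start the listed workloads.
print("assigned role:", assignment.get("role"))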
Again, thank you Curtis for your time and help to answer our questions.
No thank you, I think this is great. :)
Are you going to be doing your work in the public, like in a public git repo?
Thanks, Curtis
--
Blog: serverascode.com