Arce,
For a minimal Ceph config you will want 7 nodes, 5 of which (the controllers and storage nodes) make up the Ceph cluster itself:
- 2 controllers
- 2 computes
- 3 storage nodes
- Dual-socket Xeon, minimum 16 cores
- Minimum 64 GB RAM
- 4 disks, preferably SSDs: 3 for OSDs, 1 for the root disk
The Ceph monitors will run on the two controllers and the first storage node to provide a quorum of 3.
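For illustration only, here is a minimal sketch of what that monitor layout might look like in ceph.conf terms. StarlingX generates the actual configuration automatically; the hostnames and addresses below are assumed placeholders, not values from a real deployment:

    [global]
    # Three monitors: both controllers plus the first storage node -> quorum of 3
    mon_initial_members = controller-0, controller-1, storage-0
    # Placeholder management-network IPs for the three monitors
    mon_host = 192.168.204.3, 192.168.204.4, 192.168.204.22

With 3 OSD data disks on each of the 3 storage nodes, the cluster ends up with 9 OSDs in total.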
Brent
-----Original Message-----
From: Arce Moreno, Abraham [mailto:abraham.arce.moreno@intel.com]
Sent: Wednesday, September 26, 2018 12:25 PM
To: 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Subject: [Starlingx-discuss] CEPH Based Testing Options
In today's "Weekly StarlingX non-OpenStack Distro" meeting [0] there was a requirement for our validation activities around Ceph, so we need your help to learn more about the specific hardware.
StarlingX allows flexible storage options:
1. Linux Logical Volume Manager
2. CEPH
3. External SAN
[ CEPH : StarlingX Dedicated Storage ]
A StarlingX system can be configured with Ceph-backed storage (Controller) and any number of Compute and Storage hosts; today we are deploying this as a "Dedicated Storage" configuration [1]:
- 2 Controller Nodes
- 2 Compute Nodes
- 2 Storage Nodes
[ CEPH : Homepage Documentation ]
Looking at the Hardware Recommendations in the Ceph documentation, they list a number of components, including requirements for CPU, RAM, and storage devices (hard disks, SSDs) [2]. StarlingX provisions all of this automatically if the "Dedicated Storage" configuration is chosen.
[ CEPH : Specific Hardware ]
Can you please guide us on what makes that specific need unique?
Is it the hardware? Is it the configuration? Is it a mix of both?
1. Is the StarlingX "Dedicated Storage" configuration one of the two possible Ceph-based testing options?
2. Do we need a separate Ceph cluster provisioned, including its own OSDs and monitors?
3. Is there any other option?
We would appreciate any links to information and architecture details that would help us learn more about this request.