[PVE-User] Proxmox and Ceph with just 3 servers

Yannis Milios yannis.milios at gmail.com
Fri Aug 31 13:15:57 CEST 2018


This seems like a good read as well...
https://ceph.com/geen-categorie/ceph-osd-reweight/

On Fri, 31 Aug 2018 at 12:10, Eneko Lacunza <elacunza at binovo.es> wrote:

> You can do so from the CLI:
>
> ceph osd crush reweight osd.N <weight>
>
>
> https://ceph.com/geen-categorie/difference-between-ceph-osd-reweight-and-ceph-osd-crush-reweight/
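
To illustrate the distinction the link above describes, a minimal sketch (osd.3 and the weight values here are placeholders, not from the thread; try this on a test cluster first, as changing weights triggers data movement):

```shell
# Show current CRUSH weights (WEIGHT column) and the temporary
# override (REWEIGHT column) for every OSD
ceph osd tree

# Permanently change the CRUSH weight of osd.3 in the CRUSH map
# (weight is roughly "capacity in TiB"; 0.5 is only illustrative)
ceph osd crush reweight osd.3 0.5

# By contrast, "ceph osd reweight" sets a temporary 0..1 override
# that is not persisted in the CRUSH map
ceph osd reweight 3 0.8
```

On a busy cluster, lower weights in small steps so rebalancing traffic stays manageable.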
>
> On 31/08/18 at 13:01, Gilberto Nunes wrote:
> > Thanks a lot for all this advice, guys.
> > I'm still learning Ceph.
> > So I have a doubt about how to change the weight of a certain HDD.
> > Is there some command to do that?
> >
> > On Fri, 31 Aug 2018 at 05:58, Ronny Aasen <ronny+pve-user at aasen.cx>
> > wrote:
> >
> >> When adding an older machine to your cluster, keep in mind that the
> >> slowest node will determine the overall speed of the Ceph cluster (since
> >> a VM's disks will be spread all over it).
> >>
> >>
> >> For RBD VMs you want low latency, so prefer storage like
> >> nvram > ssd > hdd, as the difference in OSD latency is significant here.
> >>
> >> 100Gb/25Gb > 40Gb/10Gb (1Gb is useless in this case, imho)
> >>
> >> As long as you have enough cores, higher GHz is better than lower GHz,
> >> due to lower latency.
> >>
> >> kind regards.
> >> Ronny Aasen
> >>
> >>
> >>
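
The latency advice above can be sanity-checked with standard tools; a rough sketch (the pool name "bench" and the 10.10.10.2 address are assumptions for illustration, not values from the thread):

```shell
# Network round-trip between Ceph nodes: low, stable times matter
# more than raw bandwidth for RBD workloads
ping -c 10 10.10.10.2

# Per-OSD commit/apply latency as reported by Ceph itself
ceph osd perf

# Single-threaded 4K writes approximate worst-case RBD latency;
# run against a throwaway pool, never a production one
rados bench -p bench 10 write -b 4096 -t 1
```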
> >> On 31. aug. 2018 00:21, Gilberto Nunes wrote:
> >>> An HPE server will remain after deploying the 3 servers with Proxmox and Ceph.
> >>> I think I will use this HPE server as a 4th node!
> >>>
> >>>
> >>> ---
> >>> Gilberto Nunes Ferreira
> >>>
> >>> (47) 3025-5907
> >>> (47) 99676-7530 - Whatsapp / Telegram
> >>>
> >>> Skype: gilberto.nunes36
> >>>
> >>>
> >>>
> >>>
> >>> 2018-08-30 18:16 GMT-03:00 Ronny Aasen <ronny+pve-user at aasen.cx>:
> >>>
> >>>> If HA is important, you should consider having a 4th Ceph OSD server
> >>>> (it does not have to also run Proxmox).
> >>>>
> >>>> With Ceph's default of 3 replicas, which you will want to use in a
> >>>> production setup, you do not have any spare failure domain.
> >>>> IOW, the loss of any one node = a degraded Ceph cluster. If you have an
> >>>> additional node, Ceph will rebalance and return to HEALTH_OK on the
> >>>> failure of a node.
> >>>>
> >>>> With VMs, IOPS are important, so you must keep latency to a minimum.
> >>>>
> >>>> Both of these are explained in a bit more detail in the link posted earlier.
> >>>>
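
The replica behaviour described above can be inspected per pool; a sketch assuming a pool named "rbd" (a common default, not confirmed in the thread; list your actual pools with `ceph osd lspools`):

```shell
# How many replicas the pool keeps, and how many must remain
# available for client I/O to continue
ceph osd pool get rbd size      # typically 3
ceph osd pool get rbd min_size  # typically 2

# With size=3, 3 hosts, and the default failure domain of "host",
# losing a node leaves no host to hold the third replica, so the
# cluster stays degraded until the node returns; a 4th node gives
# Ceph somewhere to rebalance and reach HEALTH_OK again
```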
> >>>>
> >>>> kind regards
> >>>> Ronny Aasen
> >>>>
> >>>>
> >>>>
> >>>> On 30.08.2018 20:46, Gilberto Nunes wrote:
> >>>>
> >>>>> Hi Martin.
> >>>>>
> >>>>> I'm not really worried about the highest performance, but I want to know
> >>>>> if it will work properly, mainly for HA!
> >>>>> I plan to work with a mesh network too.
> >>>>>
> >>>>> Thanks a lot
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> 2018-08-30 15:40 GMT-03:00 Martin Maurer <martin at proxmox.com>:
> >>>>>
> >>>>>> Hello,
> >>>>>> Not really. Please read in detail the following:
> >>>>>>
> >>>>>> https://forum.proxmox.com/threads/proxmox-ve-ceph-benchmark-2018-02.41761/
> >>>>>>
> >>>>>>
> >>>>>> On 30.08.2018 16:47, Gilberto Nunes wrote:
> >>>>>>
> >>>>>>> Hi there,
> >>>>>>> Is it possible to create a setup with 3 PowerEdge R540 servers, with
> >>>>>>> Proxmox and Ceph?
> >>>>>>> Each server has this configuration:
> >>>>>>>
> >>>>>>> 32 GB memory
> >>>>>>> SAS: 2x 300 GB
> >>>>>>> SSD: 1x 480 GB
> >>>>>>>
> >>>>>>> 2 VMs with SQL Server and Windows Server.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>> _______________________________________________
> >>>>>>> pve-user mailing list
> >>>>>>> pve-user at pve.proxmox.com
> >>>>>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
> >>>>>>>
> >>>>>>>
> >>>>>> --
> >>>>>> Best Regards,
> >>>>>>
> >>>>>> Martin Maurer
> >>>>>>
> >>>>>> martin at proxmox.com
> >>>>>> http://www.proxmox.com
> >>>>>>
> >>>>>> ____________________________________________________________________
> >>>>>> Proxmox Server Solutions GmbH
> >>>>>> Bräuhausgasse 37, 1050 Vienna, Austria
> >>>>>> Commercial register no.: FN 258879 f
> >>>>>> Registration office: Handelsgericht Wien
> >>>>>>
> >>>>
>
>
> --
> Technical Director
> Binovo IT Human Project, S.L.
> Telf. 943569206
> Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa)
> www.binovo.es
>
>


