[PVE-User] Ceph - PG expected clone missing

Alwin Antreich a.antreich at proxmox.com
Mon Feb 19 16:01:44 CET 2018


Hi Karsten,

On Mon, Feb 19, 2018 at 02:36:41PM +0100, Karsten Becker wrote:
> Hi,
>
> I have one damaged PG in my Ceph cluster. All OSDs are BlueStore. How do I
> fix this?
>
>
> > 2018-02-19 14:30:24.371058 mon.0 [ERR] overall HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
> > 2018-02-19 14:30:37.733236 mon.0 [ERR] Health check update: Possible data damage: 1 pg inconsistent, 1 pg repair (PG_DAMAGED)
> > 2018-02-19 14:31:24.371286 mon.0 [ERR] overall HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent, 1 pg repair
> > 2018-02-19 14:31:23.281772 osd.29 [ERR] repair 10.7b9 10:9defb021:::rbd_data.2313975238e1f29.000000000002cbb5:head expected clone 10:9defb021:::rbd_data.2313975238e1f29.000000000002cbb5:64e 1 missing
> > 2018-02-19 14:31:23.281784 osd.29 [INF] repair 10.7b9 10:9defb021:::rbd_data.2313975238e1f29.000000000002cbb5:head 1 missing clone(s)
> > 2018-02-19 14:32:05.166591 osd.29 [ERR] 10.7b9 repair 1 errors, 0 fixed
> > 2018-02-19 14:32:05.580906 mon.0 [ERR] Health check update: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
>
>
> "ceph pg repair 10.7b9" fails and is not able to fix ist. A manually
> started scrub "ceph pg scrub 10.7b9" also.
>
> size=3 min_size=2... if it's of interest.
>
> Any help appreciated.
>
> Best from Berlin/Germany
> Karsten
>
Check your osd.29; the disk may be faulty.
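
For example, you could list what the scrub flagged in that PG and then
check the disk behind osd.29 (the device name below is only a
placeholder):

  # show the objects that scrub marked inconsistent in PG 10.7b9
  rados list-inconsistent-obj 10.7b9 --format=json-pretty

  # locate the host that osd.29 runs on
  ceph osd find 29

  # on that host, check the SMART health of the backing disk
  smartctl -a /dev/sdX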

Can you see anything more in the log of osd.29?
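
Assuming the default log location, the OSD log should be at
/var/log/ceph/ceph-osd.29.log on that node; something like this may
surface the relevant lines:

  # filter the OSD log for scrub/clone related errors
  grep -iE 'error|missing|clone' /var/log/ceph/ceph-osd.29.log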

--
Cheers,
Alwin



