Subject: Re: [PATCH v8 03/12] ceph: handle idmapped mounts in create_request_message()
On Fri, Aug 4, 2023 at 5:24 AM Xiubo Li <xiubli@redhat.com> wrote:
>
>
> On 8/4/23 10:26, Xiubo Li wrote:
> >
> > On 8/3/23 21:59, Alexander Mikhalitsyn wrote:
> >> From: Christian Brauner <brauner@kernel.org>
> >>
> >> Inode operations that create a new filesystem object, such as ->mknod(),
> >> ->create(), ->mkdir() and others, don't take a {g,u}id argument explicitly.
> >> Instead the caller's fs{g,u}id is used for the {g,u}id of the new
> >> filesystem object.
> >>
> >> In order to ensure that the correct {g,u}id is used, map the caller's
> >> fs{g,u}id for creation requests. This doesn't require complex changes.
> >> It suffices to pass in the relevant idmapping recorded in the request
> >> message. If the request message was triggered from an inode operation
> >> that creates filesystem objects, it will have passed down the relevant
> >> idmapping. If it was triggered from an inode operation that doesn't need
> >> to take idmappings into account, the initial idmapping, which is an
> >> identity mapping, is passed down.
> >>
> >> This change uses a new cephfs protocol extension,
> >> CEPHFS_FEATURE_HAS_OWNER_UIDGID, which adds two new fields
> >> (owner_{u,g}id) to the request head structure. So we need to ensure
> >> that the MDS supports it; otherwise we have to fail any IO that comes
> >> through an idmapped mount, because we can't process it properly. An MDS
> >> server without this extension will use the caller_{u,g}id fields to set
> >> the new inode's owner UID/GID, which is incorrect because the
> >> caller_{u,g}id values are unmapped. At the same time we can't map these
> >> fields with an idmapping, as that can break the UID/GID-based permission
> >> check logic on the MDS side. This problem is described in detail at [1], [2].
> >>
> >> [1] https://lore.kernel.org/lkml/CAEivzxfw1fHO2TFA4dx3u23ZKK6Q+EThfzuibrhA3RKM=ZOYLg@mail.gmail.com/
> >> [2] https://lore.kernel.org/all/20220104140414.155198-3-brauner@kernel.org/
> >>
> >> https://github.com/ceph/ceph/pull/52575
> >> https://tracker.ceph.com/issues/62217
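
As a rough, illustrative model of the behaviour described above (not the
kernel implementation, which is in the diff below), here is a small
self-contained userspace C program; every type and name in it is invented
for the example. It encodes the two rules: an idmapped mount requires an
MDS that understands owner_{u,g}id, and when the mount is idmapped the
caller's fsuid is translated through the mount's idmapping before being
sent as the owner.

/*
 * Toy userspace model of the client-side decision described above.
 * All types and names here are invented for the example; the real
 * kernel implementation is in the diff below.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A single-range idmapping: [first_upper, first_upper+count) in the mount
 * maps to [first_lower, first_lower+count) on the backing filesystem.
 * count == 0 stands for "not idmapped" (identity mapping). */
struct toy_idmap {
	uint32_t first_upper;
	uint32_t first_lower;
	uint32_t count;
};

static bool mount_is_idmapped(const struct toy_idmap *m)
{
	return m->count != 0;
}

/* Translate the caller's fsuid (as seen in the mount) to the uid that
 * should own the new inode on the backing filesystem. */
static int map_owner_uid(const struct toy_idmap *m, uint32_t caller_fsuid,
			 uint32_t *owner_uid)
{
	if (!mount_is_idmapped(m)) {
		*owner_uid = caller_fsuid;	/* identity mapping */
		return 0;
	}
	if (caller_fsuid < m->first_upper ||
	    caller_fsuid - m->first_upper >= m->count)
		return -EOVERFLOW;		/* no mapping for this uid */
	*owner_uid = caller_fsuid - m->first_upper + m->first_lower;
	return 0;
}

/* Mirror of the patch's policy: an idmapped mount requires an MDS that
 * understands owner_{u,g}id, otherwise the request must fail. */
static int fill_owner_uid(const struct toy_idmap *m, bool mds_has_owner_uidgid,
			  uint32_t caller_fsuid, uint32_t *owner_uid)
{
	if (mount_is_idmapped(m) && !mds_has_owner_uidgid)
		return -EIO;
	return map_owner_uid(m, caller_fsuid, owner_uid);
}

int main(void)
{
	/* uid 1000 inside the mount corresponds to uid 0 on the backing fs */
	struct toy_idmap idmapped = { .first_upper = 1000, .first_lower = 0, .count = 1 };
	struct toy_idmap plain = { 0, 0, 0 };
	uint32_t owner = 0;
	int ret;

	ret = fill_owner_uid(&idmapped, true, 1000, &owner);
	printf("idmapped mount, new MDS: ret=%d owner=%u\n", ret, (unsigned)owner);

	ret = fill_owner_uid(&idmapped, false, 1000, &owner);
	printf("idmapped mount, old MDS: ret=%d\n", ret);

	ret = fill_owner_uid(&plain, false, 1000, &owner);
	printf("regular mount,  old MDS: ret=%d owner=%u\n", ret, (unsigned)owner);
	return 0;
}

Running it prints ret=0 owner=0 for the idmapped-mount/new-MDS case and
ret=-5 (-EIO) for the idmapped-mount/old-MDS case, which is exactly the
failure mode the text above requires.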
> >>
> >> Cc: Xiubo Li <xiubli@redhat.com>
> >> Cc: Jeff Layton <jlayton@kernel.org>
> >> Cc: Ilya Dryomov <idryomov@gmail.com>
> >> Cc: ceph-devel@vger.kernel.org
> >> Co-Developed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> >> Signed-off-by: Christian Brauner <brauner@kernel.org>
> >> Signed-off-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
> >> ---
> >> v7:
> >> - reworked to use two new fields for owner UID/GID
> >>   (https://github.com/ceph/ceph/pull/52575)
> >> v8:
> >> - properly handled the case where an old MDS is used with a new kernel client
> >> ---
> >> fs/ceph/mds_client.c | 46 +++++++++++++++++++++++++++++++++---
> >> fs/ceph/mds_client.h | 5 +++-
> >> include/linux/ceph/ceph_fs.h | 4 +++-
> >> 3 files changed, 50 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
> >> index 8829f55103da..7d3106d3b726 100644
> >> --- a/fs/ceph/mds_client.c
> >> +++ b/fs/ceph/mds_client.c
> >> @@ -2902,6 +2902,17 @@ static void encode_mclientrequest_tail(void **p, const struct ceph_mds_request *
> >> 	}
> >> }
> >> +static inline u16 mds_supported_head_version(struct ceph_mds_session *session)
> >> +{
> >> +	if (!test_bit(CEPHFS_FEATURE_32BITS_RETRY_FWD, &session->s_features))
> >> +		return 1;
> >> +
> >> +	if (!test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features))
> >> +		return 2;
> >> +
> >> +	return CEPH_MDS_REQUEST_HEAD_VERSION;
> >> +}
> >> +
> >> static struct ceph_mds_request_head_legacy *
> >> find_legacy_request_head(void *p, u64 features)
> >> {
> >> @@ -2923,6 +2934,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >> {
> >> int mds = session->s_mds;
> >> struct ceph_mds_client *mdsc = session->s_mdsc;
> >> + struct ceph_client *cl = mdsc->fsc->client;
> >> struct ceph_msg *msg;
> >> struct ceph_mds_request_head_legacy *lhead;
> >> const char *path1 = NULL;
> >> @@ -2936,7 +2948,7 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >> 	void *p, *end;
> >> 	int ret;
> >> 	bool legacy = !(session->s_con.peer_features & CEPH_FEATURE_FS_BTIME);
> >> -	bool old_version = !test_bit(CEPHFS_FEATURE_32BITS_RETRY_FWD, &session->s_features);
> >> +	u16 request_head_version = mds_supported_head_version(session);
> >> ret = set_request_path_attr(mdsc, req->r_inode, req->r_dentry,
> >> req->r_parent, req->r_path1, req->r_ino1.ino,
> >> @@ -2977,8 +2989,10 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >> 	 */
> >> 	if (legacy)
> >> 		len = sizeof(struct ceph_mds_request_head_legacy);
> >> -	else if (old_version)
> >> +	else if (request_head_version == 1)
> >> 		len = sizeof(struct ceph_mds_request_head_old);
> >> +	else if (request_head_version == 2)
> >> +		len = offsetofend(struct ceph_mds_request_head, ext_num_fwd);
> >> 	else
> >> 		len = sizeof(struct ceph_mds_request_head);
> >
> > This is not what we're supposed to do. If we do this again and again when
> > adding new members, it will make the code very complicated to maintain.
> >
> > Once CEPHFS_FEATURE_32BITS_RETRY_FWD is supported, ceph should correctly
> > decode it, and if CEPHFS_FEATURE_HAS_OWNER_UIDGID is not supported the
> > decoder should just skip it.
> >
> > Is the MDS side buggy? Why didn't your last version work?
> >
>
> I think the ceph side is buggy. Possibly we should add a new `length`
> member to `struct ceph_mds_request_head` and just skip the extra
> bytes when decoding it.

Hm, I think I found something suspicious. The cephfs code has many places
that call the DECODE_FINISH macro, but our decoder doesn't use it.

From the documentation it follows that the purpose of DECODE_FINISH is
precisely to deal with this problem; a rough sketch of the idea follows.
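
To make the idea concrete, below is a minimal, self-contained sketch in
plain userspace C. It is not the actual Ceph wire format, the MDS decoder,
or the real DECODE_START/DECODE_FINISH macros; the struct, field names and
version numbers are invented for the example. It only demonstrates the
length-prefixed pattern both suggestions boil down to: the encoder writes a
version and a payload length up front, and the decoder consumes the fields
it understands and then skips whatever is left.

/*
 * Minimal sketch (plain userspace C; the field names and layout are
 * invented and are NOT the Ceph wire format) of the length-prefixed
 * pattern that a `length` member, or DECODE_START/DECODE_FINISH-style
 * helpers, make possible: decode only the fields you know about, then
 * skip to the end of the declared payload so that trailing fields added
 * by a newer encoder are ignored.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct decoded_head {
	uint16_t version;
	uint32_t owner_uid;	/* only present if version >= 3 */
	uint32_t owner_gid;
};

static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | (p[1] << 8));
}

static uint32_t get_le32(const uint8_t *p)
{
	return p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Returns bytes consumed, or -1 if the buffer is too short / inconsistent. */
static int decode_head(const uint8_t *buf, size_t buflen, struct decoded_head *out)
{
	size_t pos = 0;
	uint32_t payload_len;

	if (buflen < 6)
		return -1;
	out->version = get_le16(buf + pos);	/* struct version            */
	pos += 2;
	payload_len = get_le32(buf + pos);	/* bytes that follow the len */
	pos += 4;
	if (payload_len > buflen - pos)
		return -1;

	out->owner_uid = 0;
	out->owner_gid = 0;
	/* Decode only the fields this (possibly old) decoder knows about. */
	if (out->version >= 3 && payload_len >= 8) {
		out->owner_uid = get_le32(buf + pos);
		out->owner_gid = get_le32(buf + pos + 4);
	}

	/*
	 * The DECODE_FINISH step: unconditionally skip to the end of the
	 * declared payload, so unknown trailing fields from a newer encoder
	 * don't corrupt the decode of whatever comes next in the message.
	 */
	pos += payload_len;
	return (int)pos;
}

int main(void)
{
	/* A "version 4" payload carrying 4 extra bytes this decoder ignores. */
	uint8_t buf[] = {
		0x04, 0x00,			/* version = 4       */
		0x0c, 0x00, 0x00, 0x00,		/* payload_len = 12  */
		0xe8, 0x03, 0x00, 0x00,		/* owner_uid = 1000  */
		0xe8, 0x03, 0x00, 0x00,		/* owner_gid = 1000  */
		0xaa, 0xbb, 0xcc, 0xdd,		/* unknown new field */
	};
	struct decoded_head h;
	int used = decode_head(buf, sizeof(buf), &h);

	printf("consumed %d of %zu bytes, version %u, owner %u:%u\n",
	       used, sizeof(buf), (unsigned)h.version, h.owner_uid, h.owner_gid);
	return 0;
}

With something like this in the request head encoding (or with
DECODE_FINISH used consistently on the MDS side), an MDS that doesn't know
about owner_{u,g}id could simply skip those extra bytes instead of choking
on a longer head.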

What do you think?

>
> Could you fix it together with your ceph PR?
>
> Thanks
>
> - Xiubo
>
>
> > Thanks
> >
> > - Xiubo
> >
> >> @@ -3028,6 +3042,16 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >> 	lhead = find_legacy_request_head(msg->front.iov_base,
> >> 					 session->s_con.peer_features);
> >>
> >> +	if ((req->r_mnt_idmap != &nop_mnt_idmap) &&
> >> +	    !test_bit(CEPHFS_FEATURE_HAS_OWNER_UIDGID, &session->s_features)) {
> >> +		pr_err_ratelimited_client(cl,
> >> +			"idmapped mount is used and CEPHFS_FEATURE_HAS_OWNER_UIDGID"
> >> +			" is not supported by MDS. Fail request with -EIO.\n");
> >> +
> >> +		ret = -EIO;
> >> +		goto out_err;
> >> +	}
> >> +
> >> 	/*
> >> 	 * The ceph_mds_request_head_legacy didn't contain a version field, and
> >> 	 * one was added when we moved the message version from 3->4.
> >> @@ -3035,17 +3059,33 @@ static struct ceph_msg *create_request_message(struct ceph_mds_session *session,
> >> 	if (legacy) {
> >> 		msg->hdr.version = cpu_to_le16(3);
> >> 		p = msg->front.iov_base + sizeof(*lhead);
> >> -	} else if (old_version) {
> >> +	} else if (request_head_version == 1) {
> >> 		struct ceph_mds_request_head_old *ohead = msg->front.iov_base;
> >>
> >> 		msg->hdr.version = cpu_to_le16(4);
> >> 		ohead->version = cpu_to_le16(1);
> >> 		p = msg->front.iov_base + sizeof(*ohead);
> >> +	} else if (request_head_version == 2) {
> >> +		struct ceph_mds_request_head *nhead = msg->front.iov_base;
> >> +
> >> +		msg->hdr.version = cpu_to_le16(6);
> >> +		nhead->version = cpu_to_le16(2);
> >> +
> >> +		p = msg->front.iov_base + offsetofend(struct ceph_mds_request_head, ext_num_fwd);
> >> 	} else {
> >> 		struct ceph_mds_request_head *nhead = msg->front.iov_base;
> >> +		kuid_t owner_fsuid;
> >> +		kgid_t owner_fsgid;
> >>
> >> 		msg->hdr.version = cpu_to_le16(6);
> >> 		nhead->version = cpu_to_le16(CEPH_MDS_REQUEST_HEAD_VERSION);
> >> +
> >> +		owner_fsuid = from_vfsuid(req->r_mnt_idmap, &init_user_ns,
> >> +					  VFSUIDT_INIT(req->r_cred->fsuid));
> >> +		owner_fsgid = from_vfsgid(req->r_mnt_idmap, &init_user_ns,
> >> +					  VFSGIDT_INIT(req->r_cred->fsgid));
> >> +		nhead->owner_uid = cpu_to_le32(from_kuid(&init_user_ns, owner_fsuid));
> >> +		nhead->owner_gid = cpu_to_le32(from_kgid(&init_user_ns, owner_fsgid));
> >> 		p = msg->front.iov_base + sizeof(*nhead);
> >> 	}
> >> diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h
> >> index e3bbf3ba8ee8..8f683e8203bd 100644
> >> --- a/fs/ceph/mds_client.h
> >> +++ b/fs/ceph/mds_client.h
> >> @@ -33,8 +33,10 @@ enum ceph_feature_type {
> >> CEPHFS_FEATURE_NOTIFY_SESSION_STATE,
> >> CEPHFS_FEATURE_OP_GETVXATTR,
> >> CEPHFS_FEATURE_32BITS_RETRY_FWD,
> >> + CEPHFS_FEATURE_NEW_SNAPREALM_INFO,
> >> + CEPHFS_FEATURE_HAS_OWNER_UIDGID,
> >> - CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_32BITS_RETRY_FWD,
> >> + CEPHFS_FEATURE_MAX = CEPHFS_FEATURE_HAS_OWNER_UIDGID,
> >> };
> >> #define CEPHFS_FEATURES_CLIENT_SUPPORTED { \
> >> @@ -49,6 +51,7 @@ enum ceph_feature_type {
> >> CEPHFS_FEATURE_NOTIFY_SESSION_STATE, \
> >> CEPHFS_FEATURE_OP_GETVXATTR, \
> >> CEPHFS_FEATURE_32BITS_RETRY_FWD, \
> >> + CEPHFS_FEATURE_HAS_OWNER_UIDGID, \
> >> }
> >> /*
> >> diff --git a/include/linux/ceph/ceph_fs.h b/include/linux/ceph/ceph_fs.h
> >> index 5f2301ee88bc..6eb83a51341c 100644
> >> --- a/include/linux/ceph/ceph_fs.h
> >> +++ b/include/linux/ceph/ceph_fs.h
> >> @@ -499,7 +499,7 @@ struct ceph_mds_request_head_legacy {
> >> union ceph_mds_request_args args;
> >> } __attribute__ ((packed));
> >> -#define CEPH_MDS_REQUEST_HEAD_VERSION 2
> >> +#define CEPH_MDS_REQUEST_HEAD_VERSION 3
> >> struct ceph_mds_request_head_old {
> >> __le16 version; /* struct version */
> >> @@ -530,6 +530,8 @@ struct ceph_mds_request_head {
> >> __le32 ext_num_retry; /* new count retry attempts */
> >> __le32 ext_num_fwd; /* new count fwd attempts */
> >> +
> >> +	__le32 owner_uid, owner_gid; /* used for OPs which create inodes */
> >> } __attribute__ ((packed));
> >> /* cap/lease release record */
>
