Subject: Re: [RFC PATCH 00/09] Implement direct user I/O interfaces for RDMA
From: Tom Talpey
Date: Fri, 18 May 2018 17:58:25 -0700

On 5/17/2018 11:03 PM, Long Li wrote:
>> Subject: Re: [RFC PATCH 00/09] Implement direct user I/O interfaces for
>> RDMA
>>
>> On 5/17/2018 8:22 PM, Long Li wrote:
>>> From: Long Li
>>>
>>> This patchset implements direct user I/O through RDMA.
>>>
>>> In the normal code path (even with cache=none), CIFS copies I/O data
>>> from user space to kernel space for security reasons.
>>>
>>> With this patchset, a new mount option is introduced to have CIFS pin
>>> the user-space buffer into memory and perform I/O through RDMA. This
>>> avoids the memory copy, at the cost of added security risk.
>>
>> What's the security risk? This type of direct I/O behavior is not
>> uncommon, and can certainly be made safe, using the appropriate memory
>> registration and protection domains. Any risk needs to be stated
>> explicitly, and mitigation provided, or at least described.
>
> I think the assumption is that a user-mode buffer can't be trusted, so
> CIFS always copies it into internal buffers and calculates the signature
> and encryption based on the protocol used.
>
> With the direct buffer, the user can potentially modify the buffer while
> signing or encryption is in progress, or after they are done.

I don't agree that the legacy copying behavior is because the buffer is
"untrusted". The buffer is the user's data; there's no trust issue here.
If the user application modifies the buffer while it's being sent, that's a
violation of the API contract, and the only victim is the application
itself. The same applies to receiving data. And as pointed out, nearly all
storage layers, both file and block, use this strategy for direct I/O.

Regarding signing, if the application alters the data then the integrity
hash will simply do its job and catch the application in the act. Again,
nothing suffers but the application.

Regarding encryption, I assume you're proposing to encrypt and decrypt the
data in a kernel buffer, which is effectively a copy. So in fact, in the
encryption case there's no need to pin and map the user buffer at all.
I'll mention, however, that Windows takes the path of not performing RDMA
placement when encrypting data. It saves nothing, and even adds some
overhead, because of the need to touch the buffer anyway to manage the
encryption/decryption.

Bottom line - no security implication for using user buffers directly.

Tom.

> I also want to point out that I chose to implement .read_iter and
> .write_iter from file_operations to implement direct I/O (CIFS is
> already doing this for O_DIRECT, so following this code path avoids a
> big mess). The ideal choice is to implement .direct_IO from
> address_space_operations, which I think is where we eventually want to
> move.
>
>>
>> Tom.
>>
>>>
>>> This patchset is RFC. The work is in progress, do not merge.
>>>
>>>
>>> Long Li (9):
>>>   Introduce offset for the 1st page in data transfer structures
>>>   Change wdata alloc to support direct pages
>>>   Change rdata alloc to support direct pages
>>>   Change function to support offset when reading pages
>>>   Change RDMA send to recognize page offset in the 1st page
>>>   Change RDMA recv to support offset in the 1st page
>>>   Support page offset in memory registrations
>>>   Implement no-copy file I/O interfaces
>>>   Introduce cache=rdma mounting option
>>>
>>>
>>>  fs/cifs/cifs_fs_sb.h      |   2 +
>>>  fs/cifs/cifsfs.c          |  19 +++
>>>  fs/cifs/cifsfs.h          |   3 +
>>>  fs/cifs/cifsglob.h        |   6 +
>>>  fs/cifs/cifsproto.h       |   4 +-
>>>  fs/cifs/cifssmb.c         |  10 +-
>>>  fs/cifs/connect.c         |  13 +-
>>>  fs/cifs/dir.c             |   5 +
>>>  fs/cifs/file.c            | 351 ++++++++++++++++++++++++++++++++++++++++++----
>>>  fs/cifs/inode.c           |   4 +-
>>>  fs/cifs/smb2ops.c         |   2 +-
>>>  fs/cifs/smb2pdu.c         |  22 ++-
>>>  fs/cifs/smbdirect.c       | 132 ++++++++++-------
>>>  fs/cifs/smbdirect.h       |   2 +-
>>>  fs/read_write.c           |   7 +
>>>  include/linux/ratelimit.h |   2 +-
>>>  16 files changed, 489 insertions(+), 95 deletions(-)
>>>
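
For readers following the pinning discussion above, here is a minimal
sketch of how a no-copy path can pin the pages behind a user iov_iter.
The function name cifs_direct_pin_pages() is hypothetical and not from the
patchset; iov_iter_get_pages() is the standard kernel helper, and the
start offset it reports is the "offset for the 1st page" that the series
threads through the data transfer structures.

#include <linux/kernel.h>
#include <linux/uio.h>
#include <linux/mm.h>

/*
 * Illustrative sketch: pin the user pages backing an iov_iter for a
 * zero-copy transfer.  The offset of the data within the first page is
 * returned in *first_page_offset, so the RDMA layer can register a
 * partial first page instead of assuming page-aligned buffers.
 */
static ssize_t cifs_direct_pin_pages(struct iov_iter *from,
				     struct page **pages,
				     unsigned int maxpages,
				     size_t *first_page_offset)
{
	ssize_t bytes;

	bytes = iov_iter_get_pages(from, pages, LONG_MAX, maxpages,
				   first_page_offset);
	if (bytes < 0)
		return bytes;

	/*
	 * The pages stay pinned for the lifetime of the RDMA transfer and
	 * must be released with put_page() once the server has responded.
	 */
	return bytes;
}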
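
And on the question of where to hook the no-copy path, a sketch of the two
options Long Li mentions; the cifs_rdma_* names are hypothetical
placeholders, only the file_operations and address_space_operations hooks
themselves are real kernel interfaces.

#include <linux/fs.h>
#include <linux/uio.h>

/*
 * Option taken by the patchset: supply read_iter/write_iter in
 * file_operations, mirroring what CIFS already does for O_DIRECT.
 */
static ssize_t cifs_rdma_file_read_iter(struct kiocb *iocb,
					struct iov_iter *to)
{
	/* Pin the destination pages in 'to' and receive directly into them. */
	return -EOPNOTSUPP;	/* placeholder body for the sketch */
}

static ssize_t cifs_rdma_file_write_iter(struct kiocb *iocb,
					 struct iov_iter *from)
{
	/* Pin the source pages in 'from', register them, and RDMA-send. */
	return -EOPNOTSUPP;	/* placeholder body for the sketch */
}

const struct file_operations cifs_rdma_file_ops = {
	.read_iter  = cifs_rdma_file_read_iter,
	.write_iter = cifs_rdma_file_write_iter,
	/*
	 * The longer-term alternative is to keep the generic
	 * read_iter/write_iter paths and implement
	 * address_space_operations.direct_IO instead, which the generic
	 * O_DIRECT path calls through generic_file_read_iter() and
	 * generic_file_direct_write().
	 */
};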