Rename the inode pointers for better clarity. Move the d_instantiate call to
the end of the function to prevent other tasks from seeing the inode before
we've finished constructing it. Since we should have exclusive access to
the inode at that point, remove the spinlock around the i_nlink update.
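A minimal sketch of the intended ordering (names here are hypothetical, not
the actual cifs ones): fully construct the inode, update the link count
without the spinlock, and only then make it visible via d_instantiate:

#include <linux/dcache.h>
#include <linux/fs.h>

/* Hypothetical creation path; the real function in fs/cifs differs. */
static void example_finish_create(struct dentry *dentry, struct inode *newinode)
{
        /* ... finish filling in newinode's fields ... */

        /*
         * We hold the only reference to this freshly built inode, so no
         * spinlock is needed around the link count update.
         */
        set_nlink(newinode, 1);

        /*
         * Publish the inode last, so other tasks can't see it before it
         * is fully constructed.
         */
        d_instantiate(dentry, newinode);
}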
Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Currently we walk through the list of cifsFileInfo structures for every
incoming lease break and look for a matching entry there. That approach
misses lease breaks that arrive just after an open response, before we
have had time to add the new cifsFileInfo structure to the list. Fix this
by adding a new list of pending opens and looking for the lease there if
we don't find it in the list of cifsFileInfo structures.
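A rough sketch of the idea with hypothetical names (the real structures and
locking in fs/cifs differ): each open in flight is recorded on a
pending-opens list keyed by its lease key, so a lease break can still be
matched before the cifsFileInfo exists:

#include <linux/list.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical pending-open record. */
struct pending_open_example {
        struct list_head olist;         /* entry in the pending-opens list */
        __u8 lease_key[16];             /* lease key sent in the open request */
};

/* Called from lease break processing, under the appropriate lock. */
static bool lease_matches_pending_open(struct list_head *pending_opens,
                                       const __u8 *lease_key)
{
        struct pending_open_example *po;

        list_for_each_entry(po, pending_opens, olist)
                if (!memcmp(po->lease_key, lease_key, 16))
                        return true;    /* an in-flight open owns this lease */
        return false;
}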
Signed-off-by: Pavel Shilovsky <pshilovsky@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
When we have a file opened with a read oplock and we are writing data to
this file, we need to store the data in the cache and then send it to
the server to ensure that the next read operation gets coherent data.
Also put it under CONFIG_CIFS_SMB2 because it is more suitable for the SMB2
code, but it can fix some CIFS problems too (when the server delays sending
an oplock break after a write request). We can drop this ifdef
dependency in the future.
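A compressed sketch of the behaviour described above, assuming a
hypothetical example_has_read_oplock() helper; filemap_fdatawrite() stands
in for however the data actually gets pushed to the server:

#include <linux/fs.h>
#include <linux/pagemap.h>

static bool example_has_read_oplock(struct inode *inode);      /* placeholder */

/*
 * Sketch: with only a read oplock we may not hold dirty data, so once the
 * cached write has completed, send it to the server right away; the pages
 * stay in the cache so the next read is coherent.
 */
static ssize_t example_write_done(struct inode *inode, ssize_t written)
{
        if (written > 0 && example_has_read_oplock(inode))
                filemap_fdatawrite(inode->i_mapping);
        return written;
}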
Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Currently the CIFS code accepts read/write operations on a mandatory-locked
area when two processes use the same file descriptor, which is wrong.
Fix this by serializing I/O and brlock operations on the inode.
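A sketch of the serialization, assuming a per-inode rw_semaphore (the real
cifs inode keeps its own field for this): I/O takes it shared, while brlock
changes take it exclusive.

#include <linux/rwsem.h>

static DECLARE_RWSEM(example_lock_sem);         /* per-inode in the real code */

static void example_do_io(void)
{
        down_read(&example_lock_sem);   /* shared: check locks, do the I/O */
        /* ... verify mandatory lock ranges and perform the read/write ... */
        up_read(&example_lock_sem);
}

static void example_set_brlock(void)
{
        down_write(&example_lock_sem);  /* exclusive vs. all in-flight I/O */
        /* ... add or remove the byte-range lock ... */
        up_write(&example_lock_sem);
}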
Signed-off-by: Pavel Shilovsky <pshilovsky@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Allow several processes to walk through the lock list and read the
can_cache_brlcks value if they are not going to modify them.
Signed-off-by: Pavel Shilovsky <pshilovsky@etersoft.ru>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Now we need to lock/unlock a spinlock while processing brlock ops
on the inode. Move brlocks of a fid to a separate list and attach
all such lists to the inode. This lets us avoid holding a spinlock.
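A sketch of the data layout being described, with hypothetical names (the
real fields in fs/cifs differ): each fid keeps its own list of byte-range
locks, and the inode keeps a list of those per-fid lists:

#include <linux/list.h>

/* Hypothetical per-fid lock container. */
struct fid_locks_example {
        struct list_head llist_entry;   /* entry in the inode's list of lists */
        struct list_head locks;         /* brlocks owned by this fid */
};

/* Hypothetical per-inode part. */
struct inode_locks_example {
        struct list_head fid_lock_lists;        /* all fid_locks_example lists */
};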
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Now that we aren't abusing the kmap address space, there's no need for
this lock or to impose a limit on the rsize.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Replace the "marshal_iov" function with a "read_into_pages" function.
That function will copy the read data off the socket and into the
pages array, kmapping and reading pages one at a time.
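Roughly, the per-page receive loop looks like the sketch below;
recv_into_buf() is a placeholder for whatever helper actually pulls the
bytes off the socket:

#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/mm.h>

static int recv_into_buf(char *buf, unsigned int len);         /* placeholder */

/* Sketch of a read_into_pages-style loop: one page mapped at a time. */
static int example_read_into_pages(struct page **pages, unsigned int npages,
                                   unsigned int remaining)
{
        unsigned int i;

        for (i = 0; i < npages && remaining; i++) {
                unsigned int len = min_t(unsigned int, remaining, PAGE_SIZE);
                char *addr = kmap(pages[i]);
                int rc = recv_into_buf(addr, len);

                kunmap(pages[i]);
                if (rc < 0)
                        return rc;
                remaining -= len;
        }
        return 0;
}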
Signed-off-by: Jeff Layton <jlayton@redhat.com>
We'll need an array to put into a smb_rqst, so convert this into an array
instead of (ab)using the lru list_head.
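The conversion amounts to something like this sketch: allocate a plain
array of page pointers instead of chaining pages through page->lru:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Sketch: build an array of pages suitable for dropping into an smb_rqst. */
static struct page **example_alloc_page_array(unsigned int npages)
{
        struct page **pages;
        unsigned int i;

        pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL);
        if (!pages)
                return NULL;

        for (i = 0; i < npages; i++) {
                pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
                if (!pages[i]) {
                        while (i--)
                                put_page(pages[i]);
                        kfree(pages);
                        return NULL;
                }
        }
        return pages;
}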
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Eventually, we're going to want to append a list of pages to
cifs_readdata instead of a list of kvecs. To prepare for that, turn
the kvec array allocation into a separate one and just keep a
pointer to it in the readdata.
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Now that we're using TCP_CORK on the socket, there's no value in
continuing to support this option. Schedule it for removal in 3.9.
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Now that we're not kmapping so much at once, there's no need to cap
the wsize at the amount that can be simultaneously kmapped.
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
For now, none of the callers populate rq_pages. That will be done for
writes in a later patch. While we're at it, change the prototype of
setup_async_request so that it no longer needs a return pointer argument:
just return the pointer to the mid_q_entry or an ERR_PTR.
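The prototype change follows the usual ERR_PTR convention; a sketch with
stand-in helpers:

#include <linux/err.h>

struct mid_q_entry;                     /* real definitions live in fs/cifs */
struct smb_rqst;

static struct mid_q_entry *example_alloc_mid(struct smb_rqst *rqst);   /* stub */

/* New style: return the mid, or an errno encoded with ERR_PTR(). */
static struct mid_q_entry *example_setup_async_request(struct smb_rqst *rqst)
{
        struct mid_q_entry *mid = example_alloc_mid(rqst);

        if (!mid)
                return ERR_PTR(-ENOMEM);
        return mid;
}

static int example_caller(struct smb_rqst *rqst)
{
        struct mid_q_entry *mid = example_setup_async_request(rqst);

        if (IS_ERR(mid))
                return PTR_ERR(mid);    /* unwrap the encoded error */
        /* ... hand the mid off to the async send path ... */
        return 0;
}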
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Use the smb_send_rqst helper function to kmap each page in the array
and update the hash for that chunk.
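The per-page hashing step has roughly this shape (descriptor setup and the
cifs-specific plumbing omitted; this is a sketch, not the actual routine):

#include <crypto/hash.h>
#include <linux/highmem.h>
#include <linux/mm.h>

/* Sketch: fold each page of the request into the signature digest. */
static int example_hash_pages(struct shash_desc *desc, struct page **pages,
                              unsigned int npages, unsigned int tail_len)
{
        unsigned int i;
        int rc = 0;

        for (i = 0; i < npages && !rc; i++) {
                unsigned int len = (i == npages - 1) ? tail_len : PAGE_SIZE;
                void *addr = kmap(pages[i]);    /* map just this chunk */

                rc = crypto_shash_update(desc, addr, len);
                kunmap(pages[i]);
        }
        return rc;
}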
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Add code that allows smb_send_rqst to send an array of pages after the
initial kvec array has been sent. For now, we simply kmap the page
array and send it using the standard smb_send_kvec function. Eventually,
we may want to convert this code to use kernel_sendpage under the hood
and avoid the kmap altogether for the page data.
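The page-sending loop is essentially the sketch below; send_kvec() is a
placeholder for the existing smb_send_kvec-style helper:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/net.h>
#include <linux/uio.h>

static int send_kvec(struct socket *sock, struct kvec *iov,
                     unsigned int nvec);                        /* placeholder */

/* Sketch: after the kvec array has gone out, send each page the same way. */
static int example_send_pages(struct socket *sock, struct page **pages,
                              unsigned int npages, unsigned int tail_len)
{
        unsigned int i;
        int rc = 0;

        for (i = 0; i < npages && !rc; i++) {
                struct kvec iov;

                iov.iov_base = kmap(pages[i]);
                iov.iov_len = (i == npages - 1) ? tail_len : PAGE_SIZE;
                rc = send_kvec(sock, &iov, 1);
                kunmap(pages[i]);
        }
        return rc;
}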
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
We want to send SMBs as "atomically" as possible. Prior to sending any
data on the socket, cork it to make sure that no non-full frames go
out. Afterward, uncork it to make sure all of the data gets pushed out
to the wire.
Note that this more or less renders the sockopt=TCP_NODELAY mount option
obsolete. When TCP_CORK and TCP_NODELAY are used on the same socket,
TCP_NODELAY is essentially ignored.
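The cork/uncork bracket around a send looks roughly like this, using the
kernel_setsockopt() helper available at the time (a sketch, not the actual
smb_send_rqst code):

#include <linux/net.h>
#include <linux/socket.h>
#include <linux/tcp.h>

/* Sketch: hold back partial segments until the whole frame is queued. */
static void example_send_frame(struct socket *sock)
{
        int val = 1;

        kernel_setsockopt(sock, SOL_TCP, TCP_CORK, (char *)&val, sizeof(val));

        /* ... send the kvec array, then the page array ... */

        val = 0;
        kernel_setsockopt(sock, SOL_TCP, TCP_CORK, (char *)&val, sizeof(val));
}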
Acked-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Again, just a change in the arguments and some function renaming here.
In later patches, we'll change this code to deal with page arrays.
In this patch, we add a new smb_send_rqst wrapper and have smb_sendv
call that. Then we move most of the existing smb_sendv code into a new
function -- smb_send_kvec. This seems a little redundant, but later
we'll flesh this out to deal with arrays of pages.
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
We need a way to represent a call to be sent on the wire that does not
require having all of the page data kmapped. Behold the smb_rqst struct.
This new struct represents an array of kvecs immediately followed by an
array of pages.
Convert the signing routines to use these structs under the hood and
turn the existing functions for this into wrappers around that. For now,
we're just changing these functions to take different args. Later, we'll
teach them how to deal with arrays of pages.
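The struct has roughly this shape (a sketch from the description above; the
exact field names in fs/cifs may differ):

#include <linux/uio.h>

struct page;

/* Sketch of the request descriptor: kvecs first, then unmapped page data. */
struct smb_rqst_example {
        struct kvec *rq_iov;            /* array of kvecs (header, etc.) */
        unsigned int rq_nvec;           /* number of kvecs */
        struct page **rq_pages;         /* page data, not kmapped up front */
        unsigned int rq_npages;         /* number of pages */
        unsigned int rq_tailsz;         /* bytes used in the last page */
};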
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <sfrench@us.ibm.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Use hmac-sha256 rather than the hmac-md5 that is used for CIFS/SMB.
The Signature field in the SMB2 header is 16 bytes instead of 8 bytes.
Automatically enable signing by the client when requested by the server,
as long as signing ability is available to the client.
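For reference, a condensed sketch of producing such a signature with the
kernel crypto API (error paths trimmed; the real cifs code feeds the whole
assembled request, including page data, into the hash):

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>

#define EXAMPLE_SMB2_SIG_SIZE 16        /* SMB2 Signature field is 16 bytes */

/* Sketch: HMAC-SHA256 over a buffer, truncated to the 16-byte signature. */
static int example_smb2_sign(const u8 *key, unsigned int keylen,
                             const u8 *buf, unsigned int len, u8 *sig)
{
        struct crypto_shash *tfm;
        struct shash_desc *desc;
        u8 digest[32];                  /* full SHA-256 output */
        int rc;

        tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
        if (!desc) {
                crypto_free_shash(tfm);
                return -ENOMEM;
        }
        desc->tfm = tfm;

        rc = crypto_shash_setkey(tfm, key, keylen);
        if (!rc)
                rc = crypto_shash_digest(desc, buf, len, digest);
        if (!rc)
                memcpy(sig, digest, EXAMPLE_SMB2_SIG_SIZE);

        kfree(desc);
        crypto_free_shash(tfm);
        return rc;
}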
Signed-off-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Pavel Shilovsky <piastryyy@gmail.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Steve French <sfrench@us.ibm.com>