path: root/sys/nfs
Age  Commit message  Author
2023-04-26  Don't redeclare s, it's already there.  (Bob Beck)
noticed by miod@ ok kettenis@
2023-04-26  Fix missing splbio() needed in nfs  (Bob Beck)
ok claudio@ kettenis@
2023-03-08  Delete obsolete /* ARGSUSED */ lint comments.  (Philip Guenther)
ok miod@ millert@
2022-08-13  Introduce the pru_*() wrappers for corresponding (*pr_usrreq)() calls.  (Vitaliy Makkoveev)
This is helpful for the upcoming split of (*pr_usrreq)() into multiple handlers, but it already makes the code more readable. Also add '#ifndef _SYS_SOCKETVAR_H_' include guards to sys/socketvar.h; this prevents collisions when both sys/protosw.h and sys/socketvar.h are included together. Both the 'socket' and 'protosw' structures need to be defined before the pru_*() wrappers, so sys/socketvar.h is now included from sys/protosw.h. ok bluhm@
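A minimal sketch of the wrapper idea from the entry above; pru_rcvd is a representative helper name and the argument list is abbreviated for illustration, not the exact kernel prototype:

    #include <sys/protosw.h>
    #include <sys/socketvar.h>

    /*
     * Call sites use a named pru_*() helper instead of passing a PRU_*
     * request code to (*pr_usrreq)() directly.
     */
    static inline int
    pru_rcvd(struct socket *so)
    {
            return ((*so->so_proto->pr_usrreq)(so, PRU_RCVD,
                NULL, NULL, NULL, curproc));
    }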
2022-08-12  Put more struct vnode fields under splbio().  (Visa Hankala)
Buffer cache related struct vnode fields can be accessed in interrupt context. Be more consistent with the use of splbio(). OK mpi@
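A small fragment illustrating the splbio(9) pattern the two entries above are enforcing; the vnode field named in the comment is only an example:

    int s;

    s = splbio();
    /* ... inspect or modify buffer-cache state such as vp->v_dirtyblkhd ... */
    splx(s);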
2022-06-27  Fix lock order reversal in nfs_inactive()  (Visa Hankala)
Make the silly file removal happen after the vnode has been unlocked. This avoids a file-directory reversal in the vnode locking order. OK jca@
2022-06-26  Remove unused VOP_POLL().  (Visa Hankala)
OK mpi@
2022-06-06  Simplify solock() and sounlock(). There is no reason to return a value  (Claudio Jeker)
for the lock operation and to pass a value to the unlock operation. sofree() still needs an extra flag to know if sounlock() should be called or not. But sofree() is called less often and mostly without keeping the lock. OK mpi@ mvs@
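A sketch of the simplified calling convention, assuming nothing else about the call site:

    /* Before: s = solock(so); ... sounlock(so, s); */
    solock(so);
    /* ... operate on the socket under the lock ... */
    sounlock(so);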
2022-05-27  Call uvm_vnp_uncache() before VOP_RENAME().  (Martin Pieuchot)
ok kettenis@
2022-05-22  Lock kernel in nfsrv_rcv() because NFS subsystem is not MP-safe yet.  (Visa Hankala)
Tested in snaps for a week. OK bluhm@
2022-03-17  Use the refcnt API with struct ucred.  (Visa Hankala)
OK bluhm@
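A hedged sketch of what a refcnt(9)-based credential lifetime looks like; the struct member name and the storage release are illustrative, not the exact kernel code:

    #include <sys/refcnt.h>
    #include <sys/ucred.h>

    struct ucred *
    crhold(struct ucred *cr)
    {
            refcnt_take(&cr->cr_refcnt);    /* atomically gain a reference */
            return (cr);
    }

    void
    crfree(struct ucred *cr)
    {
            if (refcnt_rele(&cr->cr_refcnt)) {
                    /* last reference dropped: release the storage here */
            }
    }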
2022-03-05  remove fddi leftover  (Jonathan Gray)
no binary change
2022-02-22  Delete unnecessary #includes of <sys/domain.h> and/or <sys/protosw.h>  (Philip Guenther)
net/if_pppx.c pointed out by jsg@ ok gnezdo@ deraadt@ jsg@ mpi@ millert@
2022-01-12  fixup previous refactoring  (mbuhl)
OK stsp@ (without assuming any responsibility for NFS)
2022-01-11  spelling  (Jonathan Gray)
ok jmc@
2021-12-12  Add vnode parameter to VOP_STRATEGY()  (Visa Hankala)
Pass the device vnode as a parameter to VOP_STRATEGY() to allow calling the correct vop_strategy callback. Now the vnode is also available in the callback. OK mpi@
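The call-site change, sketched with illustrative variable names:

    /* The device vnode is passed explicitly, so the call dispatches
     * through that vnode's own vop_strategy callback. */
    error = VOP_STRATEGY(devvp, bp);    /* previously VOP_STRATEGY(bp) */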
2021-12-11  Clarify usage of __EV_POLL and __EV_SELECT  (Visa Hankala)
Make __EV_POLL specific to kqueue-based poll(2), to remove overlap with __EV_SELECT that only select(2) uses. OK millert@ mpi@
2021-10-20  revert vnode: remove VLOCKSWORK and check locking when vop_islocked != nullop  (Sebastien Marie)
(both kernel and userland bits) GENERIC + VFSLCKDEBUG is broken with it.
2021-10-19  vnode: remove VLOCKSWORK and check locking when vop_islocked != nullop  (Sebastien Marie)
This flag is currently used to mark or unmark a vnode for active checking of vnode locking semantics (when compiled with VFSLCKDEBUG). Currently the VLOCKSWORK flag isn't properly set for several FS implementations that have full locking support. This commit enables proper checking for them too (cd9660, udf, fuse, msdosfs, tmpfs). Instead of using a particular flag, it directly checks whether v_op->vop_islocked is nullop to decide whether to activate the vnode locking checks. ok mpi@
2021-10-19  vnode: do not manipulate vnode lock directly  (Sebastien Marie)
use VOP_LOCK / VOP_UNLOCK wrappers. VOP_LOCK() is preferred over vn_lock() here in order to keep equivalent code. ok mpi@ visa@ (as part of larger diff)
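A sketch of the wrapper usage the entry above switches to; the lock flag is the common exclusive case, not necessarily the exact one used:

    VOP_LOCK(vp, LK_EXCLUSIVE);
    /* ... code that requires the vnode lock ... */
    VOP_UNLOCK(vp);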
2021-10-02  vfs: merge *_badop to vop_generic_badop  (Sebastien Marie)
It replaces spec_badop, fifo_badop, dead_badop and mfs_badop, which are only calls to panic(9), with one single function vop_generic_badop(). No intended behaviour changes (other than the panic message, which isn't the same). ok mpi@
2021-03-11  spelling  (Jonathan Gray)
2021-01-19  nfs/nfs_boot.c: convert ifunit() to if_unit(9)  (mvs)
ok dlg@
2021-01-02  nfs: don't sleep on lbolt  (cheloha)
We can simulate the current behavior without lbolt by sleeping for 1 second on the &nowake channel. ok mpi@
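A sketch of the replacement wait, assuming the usual nowake channel and the SEC_TO_NSEC() helper; the priority and wait message are illustrative:

    /* Nothing ever wakes the nowake channel, so this simply pauses the
     * thread for about one second instead of waiting on lbolt. */
    tsleep_nsec(&nowake, PWAIT, "nfsidl", SEC_TO_NSEC(1));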
2020-12-25  Refactor klist insertion and removal  (Visa Hankala)
Rename klist_{insert,remove}() to klist_{insert,remove}_locked(). These functions assume that the caller has locked the klist. The current state of locking remains intact because the kernel lock is still used with all klists. Add new functions klist_insert() and klist_remove() that lock the klist internally. This allows some code simplification. OK mpi@
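A hedged sketch of the two flavours described above; the locking shown is a placeholder for whatever lock the klist currently uses (still the kernel lock at this point):

    /* Locked variant: the caller already holds the klist's lock. */
    void    klist_insert_locked(struct klist *klist, struct knote *kn);

    /* Convenience variant: takes and releases the lock internally. */
    void
    klist_insert(struct klist *klist, struct knote *kn)
    {
            KERNEL_LOCK();
            klist_insert_locked(klist, kn);
            KERNEL_UNLOCK();
    }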
2020-09-27  In the previous commit, check tv_nsec, not tv_sec as VNOVAL is a  (Matthieu Herrb)
valid value of tv_sec but an invalid value for tv_nsec. Noticed by guenther@. ok beck@ deraadt@
2020-09-27  nfs_create: after an exclusive create rpc, make sure to update  (Matthieu Herrb)
timestamps. This issue was discovered after rsync 3.2 changed behaviour on an NFS-mounted partition. Change lifted from NetBSD (r 1.204). ok beck@, kn@, deraadt@
2020-08-24  According to the code, `nfsbootdevname' is always set to the network device name  (mvs)
we expect. Remove the `else' path from nfs_boot_init(): if `nfsbootdevname' is not set, something has gone wrong and this is a panic condition. Also exclude the case where we get an `ifp' we don't expect. OK mpi@
2020-06-24  kernel: use gettime(9)/getuptime(9) in lieu of time_second(9)/time_uptime(9)  (cheloha)
time_second(9) and time_uptime(9) are widely used in the kernel to quickly get the system UTC or system uptime as a time_t. However, time_t is 64-bit everywhere, so it is not generally safe to use them on 32-bit platforms: you have a split-read problem if your hardware cannot perform atomic 64-bit reads.

This patch replaces time_second(9) with gettime(9), a safer successor interface, throughout the kernel. Similarly, time_uptime(9) is replaced with getuptime(9).

There is a performance cost on 32-bit platforms in exchange for eliminating the split-read problem: instead of two register reads you now have a lockless read loop to pull the values from the timehands. This is really not *too* bad in the grand scheme of things, but compared to what we were doing before it is several times slower. There is no performance cost on 64-bit (__LP64__) platforms.

With input from visa@, dlg@, and tedu@. Several bugs squashed by visa@. ok kettenis@
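A small usage sketch of the replacement interfaces named above:

    time_t now, up;

    now = gettime();        /* split-read-safe successor to time_second */
    up = getuptime();       /* split-read-safe successor to time_uptime */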
2020-06-11  Rename poll-compatibility flag to better reflect what it is.  (Martin Pieuchot)
While here prefix kernel-only EV flags with two underbars. Suggested by kettenis@, ok visa@
2020-06-08  Use a new EV_OLDAPI flag to match the behavior of poll(2) and select(2).  (Martin Pieuchot)
Adapt FS kqfilters to always return true when the flag is set and bypass the polling mechanism of the NFS thread. While here implement a write filter for NFS. ok visa@
2020-04-07  Abstract the head of knote lists. This allows extending the lists,  (Visa Hankala)
for example, with locking assertions. OK mpi@, anton@
2020-02-20  Replace field f_isfd with field f_flags in struct filterops to allow  (Visa Hankala)
adding more filter properties without cluttering the struct. OK mpi@, anton@
2020-01-21  sys/nfs: misc. tsleep(9) -> tsleep_nsec(9); ok mpi@  (cheloha)
2020-01-20  struct vops is not modified during runtime so use const which moves each  (Claudio Jeker)
into read-only data segment. OK deraadt@ tedu@
2020-01-15  Keep socket timeout intervals in nsecs and use them with tsleep_nsec(9).  (Martin Pieuchot)
Introduce and use TIMEVAL_TO_NSEC() to convert SO_RCVTIMEO/SO_SNDTIMEO specified values into nanoseconds. As a side effect it is now possible to specify a timeout larger than (USHRT_MAX / 100) seconds. To keep code simple `so_linger' now represents a number of seconds with 0 meaning no timeout or 'infinity'. Yes, the 0 -> INFSLP API change makes conversions complicated as many timeout holders are still memset()'d. Inputs from cheloha@ and bluhm@, ok bluhm@
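A hedged sketch of how a receive timeout would now be stored, assuming a nanosecond field in the socket buffer (the member name here is illustrative):

    uint64_t nsecs;

    nsecs = TIMEVAL_TO_NSEC(&tv);           /* SO_RCVTIMEO value */
    if (nsecs == 0)
            nsecs = INFSLP;                 /* 0 still means "no timeout" */
    so->so_rcv.sb_timeo_nsecs = nsecs;      /* illustrative member name */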
2020-01-14  In nfs_clearcommit() the loops over mnt_vnodelist and v_dirtyblkhd  (Alexander Bluhm)
do not delete anything. So the safe variant of foreach is not necessary. OK mpi@ millert@ tedu@
2020-01-10  Convert the vnode list at the mount point into a tailq. During  (Alexander Bluhm)
unmount this list is traversed and the dirty vnodes are flushed to disk. Forced unmount expects that the list is empty after flushing, otherwise the kernel panics with "dangling vnode". As the write to disk can sleep, new vnodes may be inserted. If softdep is enabled, resolving the dependencies creates new dirty vnodes and inserts them to the list. To fix the panic, let insmntque() insert new vnodes at the tail of the list. Then vflush() will still catch them while traversing the list in forward direction. OK tedu@ millert@ visa@
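The essence of the fix as a queue(3) sketch; the list head comes from the message above, the entry member name is an assumption:

    /* Insert at the tail so vflush(), walking forward from the head,
     * still reaches vnodes added while it sleeps. */
    TAILQ_INSERT_TAIL(&mp->mnt_vnodelist, vp, v_mntvnodes);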
2020-01-08  Convert infinite sleeps to tsleep_nsec(9).  (Martin Pieuchot)
ok bluhm@
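What such a conversion looks like, with illustrative identifiers:

    /* Before: tsleep(ident, PRIBIO, "nfsreq", 0);  a timeout of 0 meant
     * "sleep forever"; the new interface spells that out with INFSLP. */
    tsleep_nsec(ident, PRIBIO, "nfsreq", INFSLP);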
2019-12-31  Use C99 designated initializers with struct filterops. In addition,  (Visa Hankala)
make the structs const so that the data are put in .rodata. OK mpi@, deraadt@, anton@, bluhm@
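A hedged example of the style change, using hypothetical NFS filter callbacks and the f_isfd member that struct filterops still had at this point (it became f_flags in the 2020-02-20 entry above):

    const struct filterops nfsread_filtops = {
            .f_isfd         = 1,
            .f_attach       = NULL,
            .f_detach       = filt_nfsdetach,       /* hypothetical */
            .f_event        = filt_nfsread,         /* hypothetical */
    };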
2019-12-26  Convert struct vfsops initializer to C99 style.  (Alexander Bluhm)
OK visa@
2019-12-25  Use FOREACH macro to iterate over mnt_vnodelist.  (Alexander Bluhm)
OK millert@ visa@ benno@
2019-12-05  Convert infinite sleeps to tsleep_nsec(9).  (Martin Pieuchot)
ok jca@
2019-08-05  Allow concurrent reads of the f_offset field of struct file by  (anton)
serializing both read/write operations using the existing file mutex. The vnode lock still grants exclusive write access to the offset; the mutex is only used to make the actual write atomic and prevent any concurrent reader from observing intermediate values. ok mpi@ visa@
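An illustrative fragment of that serialization; the per-file mutex member name is an assumption:

    /* The writer still holds the vnode lock; the mutex only makes the
     * store atomic so a concurrent reader never sees a torn offset. */
    mtx_enter(&fp->f_mtx);
    fp->f_offset = newoff;
    mtx_leave(&fp->f_mtx);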
2019-07-25  vinvalbuf(9): tsleep(9) -> tsleep_nsec(9); ok millert@  (cheloha)
2019-07-19  vwaitforio(9): tsleep(9) -> tsleep_nsec(9); ok visa@  (cheloha)
2019-07-19  getblk(9): tsleep(9) -> tsleep_nsec(9); ok visa@  (cheloha)
2019-07-12  Revert anton@ changes about read/write unlocking  (solene)
https://marc.info/?l=openbsd-cvs&m=156277704122293&w=2 ok anton@
2019-07-10  Make read/write of the f_offset field belonging to struct file MP-safe;  (anton)
as part of the effort to unlock the kernel. Instead of relying on the vnode lock, introduce a dedicated lock per file. Exclusive write access is granted using the new foffset_enter and foffset_leave API. A convenience function foffset_get is also available for threads that only need to read the current offset. The lock acquisition order in vn_write has been changed to match the one in vn_read in order to avoid a potential deadlock. This change also gets rid of a documented race in vn_read(). Inspired by the FreeBSD implementation. With help and ok mpi@ visa@
2019-05-13  When killing a process, the signal is handled by any thread that  (Alexander Bluhm)
does not block the signal. If all threads block the signal, we delivered it to the main thread. This does not conform to POSIX. If any thread unblocks the signal, it should be delivered immediately to this thread. Mark such signals pending at the process instead of a single thread. Then any thread can handle it later. OK kettenis@ guenther@