Age | Commit message | Author
ok maja oga guenther
to return pids, not thread ids, so record the former when performing
operations.
ok blambert
threads, even when it has changed uid or gid in the past. As is, a
P_SUGID process using rthreads leaks the stack on thread exit.
requested and approved by tedu@ a while ago
does not cause us to call free if we never malloced.
crash found by & OK marco@
dofile{read,write}v, so remove the former and rework it so that everything
uses the latter
"nice" deraadt@ "reads ok" oga@ spastic 'OMG Ponies!!!!' weingart@
calling M_PREPEND is now #define'd to be calling m_prepend.
Shaves an unknown but assumed-to-be-nontrivial amount from the kernel.
ok claudio@ henning@(who may have had to okay this twice to get me to notice)
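A minimal userland sketch of the trade being made here. The struct and function bodies below are simplified stand-ins, not the real mbuf code: the point is that a macro which used to expand to a multi-line inline sequence at every call site now expands to a single function call, so each use costs one call instruction instead of the whole inlined sequence.

```c
#include <stdlib.h>

/* Simplified stand-in for struct mbuf: a buffer with headroom in
 * front of the data so a header can be prepended in place. */
struct mbuf {
	char	buf[256];
	size_t	off;	/* start of data within buf */
	size_t	len;	/* bytes of data */
};

/* Out-of-line worker: what used to be expanded inline at every
 * M_PREPEND call site.  Frees the mbuf and returns NULL on failure,
 * mirroring the usual semantics. */
struct mbuf *
m_prepend(struct mbuf *m, size_t plen)
{
	if (m->off < plen) {		/* no headroom in this toy model */
		free(m);
		return (NULL);
	}
	m->off -= plen;
	m->len += plen;
	return (m);
}

/* The macro is now just the call. */
#define M_PREPEND(m, plen)	((m) = m_prepend((m), (plen)))
```

With many call sites, the saved inline expansions add up, which is the "assumed-to-be-nontrivial" amount shaved from the kernel.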
is invoked with the pool mutex held, the asserts are satisfied by design.
ok tedu@
ok blambert@
not read garbage values as partitions... which we then put into the spoofed
label... and which would lead disklabel -A to make surprising decisions.
earlier versions, which did too much validation, tested by many
sensor. Based on msts(4). Tested with Praecis Ct
(http://www.endruntechnologies.com/network-time-source.htm).
help and feedback mbalmer
'no problem with this sensor going in' deraadt
apparently a leftover from tty_nmea.c
should not stop the spoofing process. Setting 'wander' means when
we are done with this MBR, read the next one.
Problem noted and fix tested by Nick Guenther.
ok weingart@ (I think), deraadt@
M_ZERO; ok deraadt
and add a missing argument to one of the printf calls.
ok art@
- getnewbuf dies. instead of having getnewbuf, buf_get, buf_stub and
buf_init we now have buf_get that is smaller than some of those
functions were before.
- Instead of allocating anonymous buffers and then freeing them if we
happened to lose the race to the hash, always allocate a buffer knowing
which <vnode, block> it will belong to.
- In cluster read, instead of allocating an anonymous buffer to cover
the whole read and then stubs for every buffer under it, make the
first buffer in the cluster cover the whole range and then shrink it
in the callback.
now, all buffers are always on the correct hash and we always know their
identity.
discussed with many, kettenis@ ok
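A toy model of the "always allocate knowing the identity" point above. The hash layout and names here are illustrative, not the real buffer-cache code: because the caller passes the <vnode, block> pair in, buf_get can look the buffer up and, if missing, put the new one on the correct hash chain immediately, so no anonymous buffer ever exists and there is no lost race to clean up after.

```c
#include <stdint.h>
#include <stdlib.h>

struct vnode;			/* opaque in this sketch */

struct buf {
	struct vnode	*b_vp;
	long		 b_blkno;
	struct buf	*b_hash;	/* hash chain link */
};

#define BUFHASH_SIZE	64
static struct buf *bufhash[BUFHASH_SIZE];

static unsigned int
bufhashidx(struct vnode *vp, long blkno)
{
	return (unsigned int)(((uintptr_t)vp ^ (uintptr_t)blkno) %
	    BUFHASH_SIZE);
}

/* Find an existing buffer for <vp, blkno>, or allocate one and put
 * it on the correct chain at birth: the buffer always has an
 * identity and is always on the right hash. */
struct buf *
buf_get(struct vnode *vp, long blkno)
{
	unsigned int idx = bufhashidx(vp, blkno);
	struct buf *bp;

	for (bp = bufhash[idx]; bp != NULL; bp = bp->b_hash)
		if (bp->b_vp == vp && bp->b_blkno == blkno)
			return (bp);

	if ((bp = calloc(1, sizeof(*bp))) == NULL)
		return (NULL);
	bp->b_vp = vp;
	bp->b_blkno = blkno;
	bp->b_hash = bufhash[idx];
	bufhash[idx] = bp;
	return (bp);
}
```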
idle proc. p_cpu might be necessary in the future and pegging is just
to be extra safe (although we'll be horribly broken if the idle proc
ever ends up where that flag is checked).
in pool_init so the pool struct doesn't have to be zeroed before
you init it.
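The shape of the fix, as a hedged sketch: the struct fields and the two-argument signature below are illustrative stand-ins (the real struct pool is much larger, which is exactly why clearing it inside the init function helps).

```c
#include <string.h>

/* Hypothetical slimmed-down pool; illustrative field names. */
struct pool {
	size_t	pr_size;	/* item size */
	size_t	pr_nget;	/* allocation counter */
	size_t	pr_nput;	/* free counter */
};

/* Zero the whole struct first, so callers may hand in a pool that
 * lives in non-zeroed storage (stack, malloc) without clearing it
 * themselves. */
void
pool_init(struct pool *pp, size_t size)
{
	memset(pp, 0, sizeof(*pp));
	pp->pr_size = size;
}
```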
something to do. Walk the highest priority queue looking for a proc
to steal and skip those that are pegged.
We could consider walking the other queues in the future too, but this
should do for now.
kettenis@ guenther@ ok
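A sketch of the steal walk described above. The flag name and the singly linked queue are illustrative, not the kernel's actual run-queue layout: the point is simply that pegged procs are skipped, since they must keep running on the cpu they are pinned to.

```c
#include <stddef.h>

#define P_CPUPEG	0x01	/* illustrative: proc pinned to its cpu */

struct proc {
	int		 p_flag;
	struct proc	*p_next;
};

/*
 * Walk one run queue (the highest-priority one, in the commit's
 * scheme) and return the first proc that may be stolen, skipping
 * any that are pegged to their cpu.
 */
struct proc *
steal_proc(struct proc *head)
{
	struct proc *p;

	for (p = head; p != NULL; p = p->p_next) {
		if (p->p_flag & P_CPUPEG)
			continue;
		return (p);
	}
	return (NULL);
}
```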
in sysctl hw.ncpufound; ok miod kettenis
has all space allocated such that we can make holes in it using extent_free().
ok miod@
"go for it" tedu@
From Kirill Timofeev
- Split up choosing of cpu between fork and "normal" cases. Fork is
very different and should be treated as such.
- Instead of implicitly choosing a cpu in setrunqueue, do it outside
where it actually makes sense.
- Just because a cpu is marked as idle doesn't mean it will be soon.
There could be a thundering herd effect if we call wakeup from an
interrupt handler, so subtract cpus with queued processes when
deciding which cpu is actually idle.
- Some simplifications allowed by the above.
kettenis@ ok (except one bugfix that was not in the initial diff)
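The "subtract cpus with queued processes" step above, reduced to a one-line sketch. Representing cpu sets as a plain bitmask here is an illustrative simplification: a cpu marked idle that already has processes queued will wake up on its own, so it is excluded before a target is picked.

```c
#include <stdint.h>

/*
 * Given the set of cpus marked idle and the set of cpus that already
 * have processes queued, return the cpus that are actually worth
 * waking: idle and with nothing pending.
 */
uint32_t
choose_idle_cpus(uint32_t idle_mask, uint32_t queued_mask)
{
	return (idle_mask & ~queued_mask);
}
```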
allocations, making sure that the union of all space is allocated.
ok deraadt@
provides, while keeping this behaviour for extent_print_all() which is
only called by ddb. Based on a diff from deraadt@.
needed, but some machines seem to work much better with it.
forever.
of the struct proc* as the identifier for SEM_UNDO tracking and only
call semexit() from the original thread, once the process as a whole
is exiting
ok tedu@
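A toy model of the ownership change described above. The integer procid stands in for the struct process pointer, and the fixed-size table is illustrative: undo records are keyed per process, so all threads of one process share one record, and semexit() runs once, for the process as a whole, not per thread.

```c
#define MAXUNDO	8

struct sem_undo {
	int	owner;		/* process id; 0 means slot free */
	int	adjust;		/* pending SEM_UNDO adjustment */
};

static struct sem_undo undos[MAXUNDO];

/* Record an adjustment against the process, whichever thread made
 * the semop: all threads of one process hit the same slot. */
void
sem_record_undo(int procid, int adjust)
{
	int i;

	for (i = 0; i < MAXUNDO; i++)
		if (undos[i].owner == procid) {
			undos[i].adjust += adjust;
			return;
		}
	for (i = 0; i < MAXUNDO; i++)
		if (undos[i].owner == 0) {
			undos[i].owner = procid;
			undos[i].adjust = adjust;
			return;
		}
}

/* Called once, when the whole process exits; returns the total
 * adjustment undone (for illustration). */
int
semexit(int procid)
{
	int i, undone = 0;

	for (i = 0; i < MAXUNDO; i++)
		if (undos[i].owner == procid) {
			undone = undos[i].adjust;
			undos[i].owner = 0;
			undos[i].adjust = 0;
		}
	return (undone);
}
```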
ok art@, henning@
on the disk lock we can find that code rather than wondering where "sd0"
gets passed to tsleep.
ok deraadt@
MD code would free resources that couldn't be freed until we were no
longer running on that processor. However, it is unused on all
architectures since mikeb@'s tss changes on x86 earlier in the year.
ok miod@
ok blambert@
allows threaded programs to concurrently update the events field while
a thread is blocked in poll(2).
okay deraadt@ millert@
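The idea above, sketched in userland with an illustrative fixed-size snapshot and a stand-in for the event scan: the kernel works on a private copy of the caller's pollfd array, so another thread rewriting an events field while this thread is blocked cannot perturb the scan, and only revents is written back at the end.

```c
#include <string.h>

/* Minimal stand-in for struct pollfd. */
struct pollfd {
	int	fd;
	short	events;
	short	revents;
};

void
do_poll(struct pollfd *user, int nfds)
{
	struct pollfd copy[16];
	int i;

	if (nfds > (int)(sizeof(copy) / sizeof(copy[0])))
		return;				/* keep the sketch simple */

	memcpy(copy, user, nfds * sizeof(copy[0]));	/* snapshot */
	/* ...block here; another thread may now rewrite user[i].events
	 * without affecting what we are checking... */
	for (i = 0; i < nfds; i++)
		copy[i].revents = copy[i].events;	/* stand-in scan */
	for (i = 0; i < nfds; i++)
		user[i].revents = copy[i].revents;	/* revents only */
}
```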
this ensures we do not count any buffers returning through biodone()
for which B_PHYS has been set - which should be set on all transfers
that manually do raw io, bypassing the buffer cache by setting up their
own buffer and calling strategy.
ok thib@, todd@, and now that he is a buffer cache and nfs hacker oga@
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus.
- load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
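The "set operations for sets of cpus" bullet above, as a hedged sketch. A word-sized bitmask is used here for brevity; the real kernel type and operation names may differ, but the operations are the usual ones.

```c
#include <stdint.h>

/* Illustrative cpu set: one bit per cpu, up to 32 cpus. */
typedef uint32_t cpuset_t;

void
cpuset_add(cpuset_t *s, int cpu)
{
	*s |= (uint32_t)1 << cpu;
}

void
cpuset_del(cpuset_t *s, int cpu)
{
	*s &= ~((uint32_t)1 << cpu);
}

int
cpuset_isset(const cpuset_t *s, int cpu)
{
	return ((*s >> cpu) & 1);
}

void
cpuset_union(cpuset_t *d, const cpuset_t *a, const cpuset_t *b)
{
	*d = *a | *b;
}

void
cpuset_complement(cpuset_t *d, const cpuset_t *s)
{
	*d = ~*s;
}
```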
ok deraadt@ fgs@
levels. This will allow for platforms where soft interrupt levels do not
map to real hardware interrupt levels to have soft ipl values overlapping
hard ipl values without breaking spl asserts.
"probably does not need a crank, so perhaps just go for it" deraadt@
path being looked up *is* a symlink, it checks if it *is not* a
symlink and returns the vnode looked up in that case.
ok thib@
unconditionally.
ok miod@
NetBSD.
ok kurt@, drahn@, miod@
wheel). This was safe, except for osiop bugs.
the panic string.
value (based on physmem) is below NKMEMPAGES_MIN, we are on a low memory
machine and can not afford more anyway.
ok deraadt@ tedu@
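The sizing rule above, as a sketch. The constants and the /4 scaling are illustrative placeholders, not any platform's real values; what matters is the shape: cap the physmem-derived value at the maximum, but do not raise a value that falls below the minimum, since such a machine can not afford more anyway.

```c
#define NKMEMPAGES_MIN	512	/* illustrative values only */
#define NKMEMPAGES_MAX	8192

long
nkmempages_from_physmem(long physmem_pages)
{
	long n = physmem_pages / 4;	/* illustrative scaling */

	if (n > NKMEMPAGES_MAX)
		n = NKMEMPAGES_MAX;
	/* No lower clamp: if n < NKMEMPAGES_MIN we are on a low
	 * memory machine and can not afford more anyway. */
	return (n);
}
```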
the receiving side when passing fd's. ok deraadt@ kettenis@