|
sensor. Based on msts(4). Tested with Praecis Ct
(http://www.endruntechnologies.com/network-time-source.htm).
help and feedback mbalmer
'no problem with this sensor going in' deraadt
|
|
apparently a leftover from tty_nmea.c
|
|
should not stop the spoofing process. Setting 'wander' means that when
we are done with this MBR, we read the next one.
Problem noted and fix tested by Nick Guenther.
ok weingart@ (I think), deraadt@
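A minimal standalone sketch of the 'wander' walk, assuming a hypothetical
read_sector() helper and glossing over the fact that real EBR offsets are
relative to the extended partition:

    #include <stdint.h>
    #include <string.h>

    #define DOSPTYP_EXTEND  0x05    /* extended partition types */
    #define DOSPTYP_EXTENDL 0x0f

    struct dos_partition {
        uint8_t     dp_flag;
        uint8_t     dp_shd, dp_ssect, dp_scyl;
        uint8_t     dp_typ;
        uint8_t     dp_ehd, dp_esect, dp_ecyl;
        uint32_t    dp_start;   /* starting sector (LBA) */
        uint32_t    dp_size;
    };

    int read_sector(uint64_t lba, uint8_t *buf);    /* hypothetical */

    void
    walk_mbrs(void)
    {
        uint8_t sector[512];
        struct dos_partition dp[4];
        uint64_t next = 0;
        int wander = 1, i;

        while (wander) {
            wander = 0;
            if (read_sector(next, sector) != 0)
                break;      /* unreadable sector ends the walk */
            memcpy(dp, &sector[446], sizeof(dp));
            for (i = 0; i < 4; i++) {
                if (dp[i].dp_typ == DOSPTYP_EXTEND ||
                    dp[i].dp_typ == DOSPTYP_EXTENDL) {
                    /* done with this MBR: read the next one */
                    next = dp[i].dp_start;
                    wander = 1;
                }
            }
        }
    }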
|
|
M_ZERO; ok deraadt
|
|
and add a missing argument to one of the printf calls.
ok art@
|
|
- getnewbuf dies. Instead of having getnewbuf, buf_get, buf_stub and
buf_init we now have buf_get that is smaller than some of those
functions were before.
- Instead of allocating anonymous buffers and then freeing them if we
happened to lose the race to the hash, always allocate a buffer knowing
which <vnode, block> it will belong to.
- In cluster read, instead of allocating an anonymous buffer to cover
the whole read and then stubs for every buffer under it, make the
first buffer in the cluster cover the whole range and then shrink it
in the callback.
Now, all buffers are always on the correct hash and we always know their
identity.
discussed with many, kettenis@ ok
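A simplified standalone model of the new discipline (all names hypothetical,
the real code manages far more state): a buffer can only be created for a
known <vnode, block> pair, and it is on the correct hash chain from the
moment it exists.

    #include <stdint.h>
    #include <stdlib.h>

    #define BUFHASH_SIZE 64

    struct vnode;                   /* opaque in this sketch */

    struct buf {
        struct vnode    *b_vp;      /* identity: <vnode, block> */
        int64_t          b_blkno;
        struct buf      *b_hnext;   /* hash chain */
    };

    static struct buf *bufhash[BUFHASH_SIZE];

    static unsigned int
    bufhashidx(struct vnode *vp, int64_t blkno)
    {
        return (((uintptr_t)vp ^ (uint64_t)blkno) % BUFHASH_SIZE);
    }

    /*
     * Return the buffer for <vp, blkno>, allocating it with its
     * identity already set; no anonymous buffers, no race to the
     * hash after allocation.
     */
    struct buf *
    buf_get(struct vnode *vp, int64_t blkno)
    {
        unsigned int idx = bufhashidx(vp, blkno);
        struct buf *bp;

        for (bp = bufhash[idx]; bp != NULL; bp = bp->b_hnext)
            if (bp->b_vp == vp && bp->b_blkno == blkno)
                return (bp);        /* already known */

        if ((bp = calloc(1, sizeof(*bp))) == NULL)
            return (NULL);
        bp->b_vp = vp;
        bp->b_blkno = blkno;
        bp->b_hnext = bufhash[idx]; /* hashed before anyone sees it */
        bufhash[idx] = bp;
        return (bp);
    }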
|
|
idle proc. p_cpu might be necessary in the future and pegging is just
to be extra safe (although we'll be horribly broken if the idle proc
ever ends up where that flag is checked).
|
|
in pool_init so the pool struct doesn't have to be zeroed before
you init it.
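The pattern in miniature (argument list simplified; the real pool_init
takes more parameters):

    #include <string.h>

    struct pool {
        const char      *pr_wchan;
        unsigned int     pr_size;
        /* ... */
    };

    void
    pool_init(struct pool *pp, unsigned int size, const char *wchan)
    {
        /* clear everything first, so callers don't have to */
        memset(pp, 0, sizeof(*pp));
        pp->pr_size = size;
        pp->pr_wchan = wchan;
    }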
|
|
something to do. Walk the highest priority queue looking for a proc
to steal and skip those that are pegged.
We could consider walking the other queues in the future too, but this
should do for now.
kettenis@ guenther@ ok
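A standalone model of the steal loop (names hypothetical): walk the highest
priority non-empty queue and take the first proc that is not pegged to its
cpu.

    #include <stddef.h>
    #include <sys/queue.h>

    #define NQS         32
    #define P_CPUPEG    0x01        /* proc is pegged to its cpu */

    struct proc {
        int                 p_flag;
        TAILQ_ENTRY(proc)   p_runq;
    };
    TAILQ_HEAD(prochead, proc);

    struct schedstate {
        struct prochead spc_qs[NQS];    /* this cpu's run queues */
        unsigned int    spc_whichqs;    /* bitmask of non-empty queues */
    };

    struct proc *
    sched_steal_proc(struct schedstate *spc)
    {
        struct proc *p;
        int q;

        if (spc->spc_whichqs == 0)
            return (NULL);
        /* lowest numbered queue == highest priority; walk only that one */
        q = __builtin_ctz(spc->spc_whichqs);
        TAILQ_FOREACH(p, &spc->spc_qs[q], p_runq)
            if ((p->p_flag & P_CPUPEG) == 0)
                return (p);     /* stealable */
        return (NULL);          /* everything here is pegged */
    }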
|
|
in sysctl hw.ncpufound; ok miod kettenis
|
|
has all space allocated such that we can make holes in it using extent_free().
ok miod@
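A kernel-style usage sketch, assuming this is the EX_FILLED flag to
extent_create(9) (not runnable standalone): the extent starts out fully
allocated and extent_free() punches holes for the ranges that really are
available.

    struct extent *ex;

    /* the whole range starts out allocated */
    ex = extent_create("iomem", 0, 0xffffffff, M_DEVBUF,
        NULL, 0, EX_NOWAIT | EX_FILLED);

    /* punch a hole for a region that is actually free */
    extent_free(ex, 0x100000, 0x200000, EX_NOWAIT);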
|
|
"go for it" tedu@
|
|
From Kirill Timofeev
|
|
- Split up choosing of cpu between fork and "normal" cases. Fork is
very different and should be treated as such.
- Instead of implicitly choosing a cpu in setrunqueue, do it outside
where it actually makes sense.
- Just because a cpu is marked as idle doesn't mean it will still be
idle when a process gets there. There could be a thundering herd effect
if we call wakeup from an interrupt handler, so subtract cpus with
queued processes when deciding which cpu is actually idle (sketched
below).
- Some simplifications allowed by the above.
kettenis@ ok (except one bugfix that was not in the initial diff)
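The 'actually idle' subtraction, as a standalone bitmask sketch (names
hypothetical):

    #include <stdint.h>

    uint64_t sched_idle_cpus;   /* cpus marked idle, one bit each */
    uint64_t sched_queued_cpus; /* cpus that already have procs queued */

    /*
     * A cpu that is marked idle but already has work queued will stop
     * being idle shortly, so it is not a good wakeup target.
     */
    static inline int
    sched_pick_cpu(void)
    {
        uint64_t set = sched_idle_cpus & ~sched_queued_cpus;

        if (set == 0)
            return (-1);                /* nobody is truly idle */
        return (__builtin_ctzll(set));  /* lowest numbered idle cpu */
    }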
|
|
allocations, making sure that the union of all space is allocated.
ok deraadt@
|
|
provides, while keeping this behaviour for extent_print_all(), which is
only called by ddb. Based on a diff from deraadt@.
|
|
needed, but some machines seem to work much better with it.
|
|
forever.
|
|
of the struct proc* as the identifier for SEM_UNDO tracking and only
call semexit() from the original thread, once the process as a whole
is exiting.
ok tedu@
|
|
ok art@, henning@
|
|
on the disk lock we can find that code rather than wondering where "sd0"
gets passed to tsleep.
ok deraadt@
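The idea in miniature (identifiers hypothetical, tsleep(9) is real): sleep
on the lock's own address with a dedicated wmesg, so the sleep shows up
under a recognizable name instead of the device's.

    while (dk->dk_locked) {
        dk->dk_wanted = 1;
        /* dedicated wchan and wmesg: shows as "dklock", not "sd0" */
        tsleep(&dk->dk_locked, PRIBIO, "dklock", 0);
    }
    dk->dk_locked = 1;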
|
|
MD code would free resources that couldn't be freed until we were no
longer running on that processor. However, it has been unused on all
architectures since mikeb@'s tss changes on x86 earlier in the year.
ok miod@
|
|
ok blambert@
|
|
allows threaded programs to concurrently update the events field while
a thread is blocked in poll(2).
okay deraadt@ millert@
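One way to get that behaviour, sketched loosely after sys_poll() with
details elided: re-read the pollfd array from userland on every pass, so
stores to the events fields from other threads are picked up after each
sleep.

    for (;;) {
        /* re-fetch: another thread may have changed pfd[i].events */
        error = copyin(fds, pl, nfds * sizeof(struct pollfd));
        if (error)
            break;
        n = pollscan(p, pl, nfds, retval);
        if (n != 0)
            break;
        error = tsleep(&selwait, PSOCK | PCATCH, "poll", timo);
        if (error)
            break;
    }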
|
|
|
|
This ensures we do not count any buffers returning through biodone()
for which B_PHYS has been set - which should be set on all transfers
that manually do raw io, bypassing the buffer cache by setting up their
own buffer and calling strategy().
ok thib@, todd@, and now that he is a buffer cache and nfs hacker oga@
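Presumably a test of roughly this shape in biodone() (counter names after
bcstats, simplified):

    if (!ISSET(bp->b_flags, B_PHYS)) {
        /* only buffers belonging to the buffer cache are counted */
        if (ISSET(bp->b_flags, B_READ))
            bcstats.pendingreads--;
        else
            bcstats.pendingwrites--;
    }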
|
|
- Split up run queues so that every cpu has one.
- Make setrunqueue choose the cpu where we want to make this process
runnable (this should be refined and less brutal in the future).
- When choosing the cpu where we want to run, make some kind of educated
guess where it will be best to run (very naive right now).
Other:
- Set operations for sets of cpus (modeled below).
- Load average calculations per cpu.
- sched_is_idle() -> curcpu_is_idle()
tested, debugged and prodded by many@
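The cpu set operations can be modeled standalone as a small bitmask type
(names hypothetical):

    #include <stdint.h>

    struct cpuset {
        uint32_t cs_set;    /* one bit per cpu; 32 cpus in this model */
    };

    static inline void
    cpuset_add(struct cpuset *cs, int cpu)
    {
        cs->cs_set |= 1U << cpu;
    }

    static inline void
    cpuset_del(struct cpuset *cs, int cpu)
    {
        cs->cs_set &= ~(1U << cpu);
    }

    static inline int
    cpuset_isset(const struct cpuset *cs, int cpu)
    {
        return ((cs->cs_set & (1U << cpu)) != 0);
    }

    /* to = a & ~b, e.g. "idle cpus minus cpus with queued procs" */
    static inline void
    cpuset_complement(struct cpuset *to, const struct cpuset *a,
        const struct cpuset *b)
    {
        to->cs_set = a->cs_set & ~b->cs_set;
    }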
|
|
ok deraadt@ fgs@
|
|
levels. This will allow platforms where soft interrupt levels do not
map to real hardware interrupt levels to have soft ipl values overlapping
hard ipl values without breaking spl asserts.
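One plausible encoding, sketched with entirely hypothetical values: tag
soft levels with a bit of their own, so a soft ipl can numerically overlap
a hard one and the spl asserts can still tell them apart.

    #define IPL_SOFTBIT     0x80000000  /* marks a soft interrupt level */

    #define IPL_SOFTCLOCK   (IPL_SOFTBIT | 1)
    #define IPL_SOFTNET     (IPL_SOFTBIT | 2)   /* may overlap a hard value */

    #define IPL_IS_SOFT(ipl)    (((ipl) & IPL_SOFTBIT) != 0)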
|
|
|
|
"probably does not need a crank, so perhaps just go for it" deraadt@
|
|
path being looked up *is* a symlink, it checks if it *is not* a
symlink and returns the vnode looked up in that case.
ok thib@
|
|
unconditionally.
ok miod@
|
|
NetBSD.
ok kurt@, drahn@, miod@
|
|
wheel). This was safe, except for osiop bugs.
|
|
the panic string.
|
|
value (based on physmem) is below NKMEMPAGES_MIN, we are on a low memory
machine and cannot afford more anyway.
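Reading this as dropping the lower clamp, a standalone sketch (the scaling
and the constants are illustrative only):

    #define NKMEMPAGES_MAX  16384

    unsigned long
    nkmempages_default(unsigned long physmem_pages)
    {
        unsigned long n = physmem_pages / 4;    /* illustrative scaling */

        if (n > NKMEMPAGES_MAX)
            n = NKMEMPAGES_MAX;
        /*
         * No rounding up to NKMEMPAGES_MIN: if n is below it, this is
         * a low memory machine that cannot afford more anyway.
         */
        return (n);
    }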
ok deraadt@ tedu@
|
|
the receiving side when passing fd's. ok deraadt@ kettenis@
|
|
|
|
since it is essentially free. To turn on the checking of the rest of the
allocation, use 'option POOL_DEBUG'
ok tedu
|
|
between releases we may want to turn it on, since it has uncovered real
bugs)
ok miod henning etc etc
|
|
|
|
ok miod@
|
|
did not care either, and with this, packets from drivers with external buffers
(e.g. wpi(4)) would trigger this panic through pf(4).
Found the hard way by Tim van der Molen tbvdm (at) xs4all (dot) nl
|
|
on; prompted by Thorsten Glaser; ok miod@ krw@
|
|
(M_TRAILINGSPACE()) and allocate one cluster if needed (instead of chaining
many mbufs). Somewhat needed for the rl(4) fix to ensure that the ethernet
header is in one mbuf for sure. Tested by landry@ and myself.
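A kernel-style sketch of the approach (error handling simplified): check
M_TRAILINGSPACE() on the tail mbuf and, when the data will not fit, grab
one cluster instead of chaining many small mbufs.

    if (M_TRAILINGSPACE(m) < len) {
        struct mbuf *n;

        MGET(n, M_DONTWAIT, MT_DATA);
        if (n == NULL)
            return (ENOBUFS);
        if (len > MLEN) {
            /* one cluster instead of a chain of small mbufs */
            MCLGET(n, M_DONTWAIT);
            if (!ISSET(n->m_flags, M_EXT)) {
                m_free(n);
                return (ENOBUFS);
            }
        }
        n->m_len = 0;
        m->m_next = n;
        m = n;
    }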
|
|
got multiple signals before tsleep() could wake up. Also, POSIX
says that sigwait() should never return EINTR, so map that to
ERESTART.
ok kurt@, tedu@
fixes the panic encountered by ariane@ with kaffe
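The mapping itself is a one-liner at the tail of the wait loop (context
simplified):

    if (error == EINTR)
        error = ERESTART;   /* POSIX: sigwait() never returns EINTR */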
|
|
POSIX 1003.1-2008, with compatibility macros for the names used in
previous versions of OpenBSD. Update all the references in the
kernel to use the new, standard member names.
ok'ed by miod@, otto@; ports build test by naddy@
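Assuming this is the usual POSIX 1003.1-2008 case, the struct stat timespec
members, the compat pattern looks like:

    struct stat {
        /* ... */
        struct timespec st_atim;    /* standard name */
        struct timespec st_mtim;
        struct timespec st_ctim;
        /* ... */
    };

    /* keep code using the old OpenBSD names building */
    #define st_atimespec    st_atim
    #define st_mtimespec    st_mtim
    #define st_ctimespec    st_ctim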
|
|
to prevent the hwm from growing beyond that. This allows the livelock
mitigation to do something where the hwm used to grow beyond twice the
rx ring's size.
ok kettenis@ claudio@
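If this is the if_rxr(9) interface, the bound lands where the ring is set
up (driver fields hypothetical): the third argument caps the high watermark
at the ring size.

    if_rxr_init(&sc->sc_rx_ring, 4, nitems(sc->sc_rx_slots));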
|
|
OK dlg@
|