Age | Commit message | Author
|
|
x86 __mp_lock changes, but keeping the internal __cpu_simplelock_t to
guarantee atomic access to the __mp_lock fields.
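For illustration, a minimal sketch of such a layout (field names are assumed, not taken from the tree; __cpu_simplelock_t and struct cpu_info come from the machine headers). The embedded simplelock exists only to make updates of the other fields atomic:

    struct __mp_lock {
        __cpu_simplelock_t        mpl_lock;   /* guards the fields below */
        volatile struct cpu_info *mpl_cpu;    /* current owner, NULL when free */
        volatile int              mpl_count;  /* recursion depth of the owner */
    };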
|
|
sys/dev/pci/pciide.c from naddy@
|
|
code. At this moment all architectures get a copy of the old code
except i386, which gets a shiny new implementation that doesn't spin
at splhigh (doh!) and doesn't try to grab the biglock when releasing
the biglock (double doh!).
Shaves 10% of system time during kernel compile and might solve a few
bugs as a bonus.
Other architectures coming shortly.
miod@ deraadt@ ok
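A rough sketch of the i386 idea (names and layout are illustrative, not the actual code): raise the spl only around the short critical section that touches the lock fields, and drop it again before spinning, so interrupts keep being serviced while waiting for the biglock.

    void
    __mp_lock(struct __mp_lock *mpl)
    {
        struct cpu_info *ci = curcpu();
        int s;

        for (;;) {
            s = splhigh();
            __cpu_simple_lock(&mpl->mpl_lock);
            if (mpl->mpl_count == 0 || mpl->mpl_cpu == ci) {
                mpl->mpl_cpu = ci;          /* free, or recursive acquisition */
                mpl->mpl_count++;
                __cpu_simple_unlock(&mpl->mpl_lock);
                splx(s);
                return;
            }
            __cpu_simple_unlock(&mpl->mpl_lock);
            splx(s);                        /* do not spin at splhigh */
            while (mpl->mpl_count != 0)
                ;                           /* spin with interrupts enabled */
        }
    }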
|
|
directive can select between MI and MD versions of these files. At
the same time, adjust the boot programs to pick exactly what they need,
instead of the 7 or 8 mechanisms previously used.
There will be some fallout from this, but testing it all by myself is a
ridiculously slow process; it will be finished in-tree.
Various developers were very nice and avoided making fun of me when I
was gibbering in the corner..
|
|
decide which files must be pulled into the kernel. Also conditionalize
the pulling of those files based on the COMPAT_* options.
|
|
it's amazing things didn't break.
|
|
and 197DP to route interrupts to the processor we're booting on. This allows
a 197DP to run when booting from the second cpu.
|
|
combos (MVME197SP/DP), and implement supposedly smarter cache routines.
There is still room for improvement, however, cache flush operation errata
permitting.
Tested on 197LE and 197DP.
|
|
whenever necessary, instead of duplicating the same code 10+ times.
|
|
for the current processor. And remove now unused cmmu_flush_data_page().
|
|
and will not require such a lock.
|
|
broken now).
|
|
IPL_NONE; fixes a false splassert warning on boot.
|
|
appropriate types. No functional change.
|
|
boards such as mvme1[89]7 where spl changes can be atomic.
|
|
been years since it has last been used for that purpose, so name it the
initialization/startup stack.
While there, do not store the initialization stack in cpu_info, and have
secondary_pre_main() return its value so that the bootstrap code does not
need to fetch it from cpu_info.
This might be reconsidered once the startup stacks get freed after they
are no longer used, but there are more things to do first.
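A hypothetical sketch of the new shape (the allocator and sizes are guesses; only secondary_pre_main() and the return-value idea come from the text): the assembly glue keeps the returned pointer in a register instead of reloading it from cpu_info.

    vaddr_t
    secondary_pre_main(void)
    {
        vaddr_t init_stack;

        /* ... per-cpu setup that still runs on a temporary stack ... */
        init_stack = uvm_km_zalloc(kernel_map, USPACE);  /* assumed allocator */
        return (init_stack + USPACE);   /* initial sp; the stack grows down */
    }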
|
|
a function, so that it does not get reloaded from cr17 every time.
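In other words, the usual idiom, sketched here (function and field names are illustrative): fetch the per-cpu pointer once and reuse the local copy, instead of paying for a cr17 read at every access.

    void
    some_handler(void)                      /* illustrative function */
    {
        struct cpu_info *ci = curcpu();     /* one cr17 read on m88k */

        ci->ci_intrdepth++;                 /* field shown for illustration */
        /* ... keep using ci instead of calling curcpu() again ... */
        ci->ci_intrdepth--;
    }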
|
|
since the corresponding interrupt source is enabled on the primary processor
only.
|
|
a single ci_flags bitfield.
Also, set_cpu_number() will no longer set CIF_PRIMARY on the primary processor;
it's up to the initialization code to do this.
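A minimal sketch of the pattern (the flag values are made up; only ci_flags and CIF_PRIMARY come from the text above):

    #define CIF_ALIVE       0x01            /* cpu has been set up */
    #define CIF_PRIMARY     0x02            /* bootstrap processor */

    /* done by the initialization code now, not by set_cpu_number() */
    curcpu()->ci_flags |= CIF_PRIMARY;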
|
|
drift in MP kernels.
|
|
no functional change.
|
|
gets handled like a real hardware interrupt (which it is supposed to mimic
anyway).
|
|
one, so that we can have maskable and unmaskable IPIs. Make the clock ipis
maskable, and masked at IPL_CLOCK and above. This allows us to get rid
of the retrig hack in setipl().
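Sketched in pseudo-setipl() form (the mask table, bit name and register write are hypothetical; only the IPL_CLOCK rule comes from the message): when the new level blocks clock interrupts, the clock IPI bit is masked along with them, so a clock IPI arriving at a high ipl simply stays pending instead of having to be retriggered later.

    void
    setipl(int ipl)
    {
        struct cpu_info *ci = curcpu();
        u_int32_t mask = ipl2mask[ipl];     /* hypothetical per-level enable mask */

        if (ipl >= IPL_CLOCK)
            mask &= ~IPI_BIT_CLOCK;         /* hypothetical: clock IPI is maskable */
        ci->ci_ipl = ipl;
        intsrc_set_mask(ci, mask);          /* hypothetical hardware write */
    }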
|
|
- only process the pending ipis once per external interrupt, at the beginning.
- use the ipl we were at when the interrupt occurred, not the ipl at which
we enabled interrupts again, in order to decide whether we can run hardclock
or statclock.
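A sketch of the resulting flow (names are assumed; only the two rules above come from the commit):

    void
    ext_int(struct trapframe *frame)        /* external interrupt entry, illustrative */
    {
        struct cpu_info *ci = curcpu();
        int old_ipl = ci->ci_ipl;           /* ipl at the time of the interrupt */

        /* 1. process pending IPIs once, at the beginning */
        if (ci->ci_ipi & CI_IPI_HARDCLOCK) {            /* assumed flag name */
            ci->ci_ipi &= ~CI_IPI_HARDCLOCK;            /* atomically in real code */
            /* 2. decide on the interrupted ipl, not on whatever level we
               are at once interrupts have been re-enabled */
            if (old_ipl < IPL_CLOCK)
                hardclock((struct clockframe *)frame);  /* assumed frame layout */
        }

        /* ... then dispatch the regular hardware interrupt sources ... */
    }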
|
|
stack a startup stack.
|
|
ok miod
|
|
and put the process it was running back on the run queue (unless this was
the idle proc).
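Roughly, with the scheduler interfaces of that era (illustrative fragment, field names as recalled):

    /* in the cpu-parking path */
    struct cpu_info *ci = curcpu();
    struct proc *p = curproc;

    if (p != NULL && p != ci->ci_schedstate.spc_idleproc) {
        p->p_stat = SRUN;                   /* make it runnable again */
        setrunqueue(p);                     /* hand it back to the scheduler */
    }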
|
|
before they are started (not skipping numbers on machine setups with
holes in cpu slots). Since we start secondary cpus very late in the boot
process, and sched_init_cpu() has to be invoked before proc0 execve's init,
I don't think there is a better way to do this.
This lets MVME188 systems with more than one processor boot multiuser.
|
|
output, and nothing else.
|
|
rather than expecting it to do this for us.
|
|
- Move the functionality of choosing a process from cpu_switch into
a much simpler function: cpu_switchto. Instead of having the locore
code walk the run queues, let the MI code choose the process we
want to run and only implement the context switching itself in MD
code.
- Let MD context switching run without worrying about spls or locks.
- Instead of having the idle loop implemented with special contexts
in MD code, implement one idle proc for each cpu. Make the idle
loop MI with MD hooks.
- Change the proc lists from the old style vax queues to TAILQs.
- Change the sleep queue from vax queues to TAILQs. This makes
wakeup() go from O(n^2) to O(n).
There will be some MD fallout, but it will be fixed shortly.
There's also a few cleanups to be done after this.
deraadt@, kettenis@ ok
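For the last two points, an illustrative (not the kernel's) sketch of why TAILQs get wakeup() to O(n): every matching proc can be unlinked in O(1) while the sleep bucket is walked exactly once. For reference, the new MD interface is simply void cpu_switchto(struct proc *old, struct proc *new).

    #include <sys/queue.h>

    TAILQ_HEAD(slpque, proc);

    void
    wakeup_bucket(struct slpque *qp, const volatile void *ident)
    {
        struct proc *p, *next;

        for (p = TAILQ_FIRST(qp); p != NULL; p = next) {
            next = TAILQ_NEXT(p, p_runq);   /* remember before unlinking */
            if (p->p_wchan == ident) {
                TAILQ_REMOVE(qp, p, p_runq);        /* O(1) unlink */
                /* ... mark p runnable and put it on a run queue ... */
            }
        }
    }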
|
|
previously applied to other archs, deleting a memset() this time, e.g.
- if ((mapstore = malloc(mapsize, M_DEVBUF,
- (flags & BUS_DMA_NOWAIT) ? M_NOWAIT : M_WAITOK)) == NULL)
+ if ((mapstore = malloc(mapsize, M_DEVBUF, (flags & BUS_DMA_NOWAIT) ?
+ (M_NOWAIT | M_ZERO) : (M_WAITOK | M_ZERO))) == NULL)
return (ENOMEM);
- memset(mapstore, 0, mapsize);
|
|
In ip_esp.c all allocated memory is now zero'd in the
"malloc(sizeof(*tc) + alen ..." case. The +alen memory was not
initialized by the bzero() call. Noticed by chl@.
"Looks good" art@ "seems ok" chl@
|
|
ok art@
|
|
kernel builds locally this doesn't change much, but over NFS this
cuts about 12% of the build time on my setup (i386).
OK miod@, deraadt@.
|
|
cpu_disklabel can go away, since nothing needs to use it anymore; ok miod
|
|
to support hotplug media on most architectures. Disklabel setup and
verification are done using new helper functions. Disklabels must *always*
have a correct checksum now. The same code paths are used to learn the on-disk
location of disklabels, to avoid new errors sneaking in. Tested on almost all
cases, testing help from todd, kettenis, krw, otto, dlg, robert, gwk, drahn
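For reference, the checksum being enforced is the classic XOR of the label's 16-bit words (sketch below, essentially the historical dkcksum(); d_checksum is chosen at write time so that the sum over the whole label comes out to zero, which is what the verification helpers can now insist on):

    u_int16_t
    dkcksum(struct disklabel *lp)
    {
        u_int16_t *start, *end;
        u_int16_t sum = 0;

        start = (u_int16_t *)lp;
        end = (u_int16_t *)&lp->d_partitions[lp->d_npartitions];
        while (start < end)
            sum ^= *start++;
        return (sum);       /* 0 for a label with a valid d_checksum */
    }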
|