to complete matthew@'s commit of a few days ago, and drop __HAVE_CPU_MUTEX_LEVEL
define. With help from, and ok deraadt@.
|
|
MI softfloat code, implementing all MIPS IV specified floating point
operations.
Tested on R5000, R10000, R14000 and Loongson2F.
|
|
to be visible if _STANDALONE. This will eventually be used by the upcoming
new-and-improved loongson bootblocks (in the works).
|
|
No functional change.
|
|
ok miod@
|
|
Unlike the previous implementation, we no longer use the physical cpuid to
fetch curcpu(); this is required in order to implement IP27/35 SMP.
Implemented getcurcpu() and setcurcpu() for this, and renamed smp_malloc() to
alloc_contiguous_pages() because it now only allocates by page.
ok miod@
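As a rough illustration of the interface only (the real getcurcpu()/setcurcpu()
are MD assembly, and where they stash the pointer is not shown in this log; the
mock storage and main() below are invented):

    /* Userland mock of the interface, not the kernel code. */
    #include <stdio.h>

    struct cpu_info {
            int ci_cpuid;                   /* logical cpu number */
    };

    /* Stand-in for wherever the port really keeps the pointer. */
    static struct cpu_info *curcpu_store;

    static void
    setcurcpu(struct cpu_info *ci)
    {
            curcpu_store = ci;              /* done once at cpu bringup */
    }

    static struct cpu_info *
    getcurcpu(void)
    {
            return curcpu_store;            /* no physical cpuid lookup */
    }

    int
    main(void)
    {
            struct cpu_info cpu0 = { .ci_cpuid = 0 };

            setcurcpu(&cpu0);
            printf("running on cpu%d\n", getcurcpu()->ci_cpuid);
            return 0;
    }

The point of the change is visible in getcurcpu(): the running processor's
cpu_info is whatever was stored, not an array entry indexed by the hardware's
physical cpuid.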
|
|
processors can display correct data. Now cpu1 on octane is correctly
reported in dmesg.
|
|
be decoupled from the nominal processor speed.
While there, make sure delay() gets a proper delay constant if invoked before
cpu0 attaches (how could I miss that when introducing struct cpu_hwinfo?!?)
|
|
allows processors with different cache sizes to be used.
Cache management routines now take a struct cpu_info * as first parameter.
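A sketch of what passing struct cpu_info * enables, with hypothetical routine
and field names (this is not the actual mips64 cache interface): each
processor's cache geometry is read from its own cpu_info, so one routine serves
cpus with different cache sizes.

    /* Illustrative only; names are placeholders. */
    #include <stddef.h>
    #include <stdint.h>

    typedef uintptr_t vaddr_t;

    struct cpu_info {
            uint32_t ci_l1_linesize;        /* discovered when the cpu attaches */
    };

    /* Writeback-invalidate [va, va + len) on the cpu described by ci. */
    static void
    cache_wbinv_range(struct cpu_info *ci, vaddr_t va, size_t len)
    {
            vaddr_t eva = va + len;

            for (va &= ~(vaddr_t)(ci->ci_l1_linesize - 1); va < eva;
                va += ci->ci_l1_linesize) {
                    /* a cache instruction per line would go here */
            }
    }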
|
|
processor (instead of sys_config.cpu[]), and pass it in the attach_args
when attaching cpu devices.
This allows per-cpu information to be gathered late in the bootstrap process,
and not be limited by an arbitrary MAX_CPUS limit; this will suit IP27 and
IP35 systems better.
While there, use this information to make sure delay() uses the speed
information from the cpu it is invoked on.
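A hedged sketch of the delay() side of this (struct cpu_hwinfo appears in this
log, but the field name and read_cycle_counter() below are placeholders): the
calibration constant comes from the cpu the function is running on, not from a
single global.

    /* Illustrative; read_cycle_counter() stands in for the real
     * cycle source (e.g. the CP0 count register). */
    #include <stdint.h>

    struct cpu_hwinfo {
            uint32_t clock;                 /* Hz, per processor */
    };

    struct cpu_info {
            struct cpu_hwinfo ci_hw;
    };

    extern struct cpu_info *curcpu(void);
    extern uint32_t read_cycle_counter(void);

    void
    delay(int usec)
    {
            /* cycles to burn, scaled by the speed of this very cpu */
            uint32_t cycles =
                (uint32_t)((uint64_t)usec * curcpu()->ci_hw.clock / 1000000);
            uint32_t start = read_cycle_counter();

            while (read_cycle_counter() - start < cycles)
                    continue;
    }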
|
|
ok miod@
|
|
when invoking the cache functions. The physical address is needed when
operating on physically-indexed caches, such as the L2 cache on Loongson
processors.
Preprocessor abuse makes sure that the physical address computation gets
compiled out when running on a kernel compiled for virtually-indexed
caches only, such as the sgi kernel.
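A sketch of the compile-out pattern being described, with an invented option
name (CACHE_PHYS_INDEXED and the helpers below are stand-ins, not the real
ones): on a kernel built only for virtually-indexed caches, the physical
address argument reduces to a constant and the translation that would produce
it vanishes.

    /* Illustrative pattern only. */
    #include <stdint.h>

    typedef uint64_t vaddr_t;
    typedef uint64_t paddr_t;

    #ifdef CACHE_PHYS_INDEXED
    /* Physically-indexed L2 (e.g. Loongson): the pa is really needed. */
    extern paddr_t  va_to_pa(vaddr_t);
    extern void     l2_wbinv_range(paddr_t, uint64_t);
    #define CACHE_PA(va)    va_to_pa(va)
    #else
    /* Virtually-indexed caches only (e.g. an sgi kernel): the pa is
     * never used, so its computation compiles out entirely. */
    #define CACHE_PA(va)    ((paddr_t)0)
    #endif

    void
    sync_dcache_range(vaddr_t va, uint64_t len)
    {
            paddr_t pa = CACHE_PA(va);      /* free when not needed */

    #ifdef CACHE_PHYS_INDEXED
            l2_wbinv_range(pa, len);
    #else
            (void)pa;                       /* virtual-index ops only */
    #endif
    }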
|
|
rather than <mips64/param.h>.
For now, kernels are kept at 4KB to give people some time to build 16KB
compatible binaries; this will change before the end of this release cycle.
Use of 16KB page size kernels yields an 18% speedup (which, offset by the
1.6% slowdown caused by the pmap changes, yields a 16.6% overall speedup).
|
|
Also a few xheart modifications for SMP.
ok miod@
|
|
This function allocates memory using malloc or uvm_pglistalloc, then returns
the XKPHYS address of the allocated memory.
This avoids using virtual addresses on secondary cpus at an early stage, and
also in the TLB handler.
ok miod@
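A hedged sketch of the XKPHYS part (the layout below follows the generic MIPS64
scheme, where XKPHYS addresses have the top two bits set to 10 and the cache
coherency attribute in bits 61-59; the specific CCA values are assumptions):
such a direct-mapped address needs no TLB entry, which is what makes it safe to
use on secondary cpus early on and inside the TLB handler.

    /* Standalone demo of the address arithmetic only. */
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t paddr_t;

    #define XKPHYS_BASE     0x8000000000000000ULL   /* bits 63:62 == 10 */
    #define CCA_NC          2ULL                    /* uncached (typical) */
    #define CCA_CACHED      3ULL                    /* cacheable (typical) */

    /* Direct-mapped address of physical address pa with attribute cca. */
    #define PHYS_TO_XKPHYS(pa, cca) \
            (XKPHYS_BASE | ((uint64_t)(cca) << 59) | (uint64_t)(pa))

    int
    main(void)
    {
            paddr_t pa = 0x20000000;        /* some physical page */

            printf("cached:   %#llx\n",
                (unsigned long long)PHYS_TO_XKPHYS(pa, CCA_CACHED));
            printf("uncached: %#llx\n",
                (unsigned long long)PHYS_TO_XKPHYS(pa, CCA_NC));
            return 0;
    }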
|
|
ok miod@
|
|
define more 64 bit spaces.
|
|
logical IPL level, and per-platform (IP27/IP30/IP32) code will compute the
necessary hardware mask registers.
This allows the use of more than one interrupt mask register. Also, the
generic (platform independent) interrupt code shrinks a lot, and the actual
interrupt handler chains and masking information is now per-platform private
data.
Interrupt dispatching is generated from a template; more routines will be
added to the template to reduce platform-specific changes and share as much
code as possible.
Tested on IP27, IP30, IP32 and IP35.
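A sketch of the masking half of such a scheme, with invented names and sizes
(NIPLS, NMASKREGS and the functions below are illustrative, and actually
writing the masks to hardware is stubbed out): per-platform code precomputes,
for every logical IPL, the enable bits of each hardware mask register, so
changing the IPL is just loading precomputed words.

    /* Illustrative sketch only. */
    #include <stdint.h>

    #define NIPLS           16      /* logical interrupt priority levels */
    #define NMASKREGS       2       /* more than one hw mask register */

    /* imask[ipl][reg]: interrupt enable bits allowed at that ipl,
     * rebuilt whenever a handler is registered or removed. */
    static uint64_t imask[NIPLS][NMASKREGS];

    /* A source wired to bit `bit' of register `reg' interrupts at
     * logical level `ipl': keep it enabled only below that level. */
    static void
    intr_mask_source(int reg, int bit, int ipl)
    {
            int lvl;

            for (lvl = 0; lvl < ipl; lvl++)
                    imask[lvl][reg] |= 1ULL << bit;
    }

    /* Raising or lowering the IPL just loads the precomputed words. */
    static void
    hw_setipl(int ipl)
    {
            int reg;

            for (reg = 0; reg < NMASKREGS; reg++) {
                    /* write imask[ipl][reg] to mask register `reg' */
                    (void)imask[ipl][reg];
            }
    }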
|
|
sources were masked and saved in ci_ipending, as splx() will unmask what needs
to be unmasked anyway. ci_ipending now only needs to store pending soft
interrupts, so rename it to ci_softpending.
|
|
in the coprocessor 0 status register (coupled with ICR on rm7k/rm9k), and
may be completely alien to real hardware interrupt masks, so don't make
things unnecessarily confusing.
|
|
OK miod@
|
|
OK miod@
|
|
Added a cpu_info pointer array, a cpu_info iterator, and a cpu_number()
implementation.
Fixed a constraint modifier in lock.h to output correct assembly.
proc_trampoline_mp is now called in exception.S.
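A shape-only sketch of such an array, iterator and cpu_number() (OpenBSD's MI
code uses CPU_INFO_FOREACH/CPU_INFO_ITERATOR; the concrete definitions below
are assumptions for illustration, as is MAXCPUS):

    /* Illustrative definitions, not the actual MD ones. */
    #include <stddef.h>

    #define MAXCPUS         4

    struct cpu_info {
            int     ci_cpuid;               /* logical cpu number */
    };

    /* Pointer array, filled in as each cpu attaches. */
    struct cpu_info *cpu_info[MAXCPUS];

    /* Iterate over attached cpus. */
    #define CPU_INFO_ITERATOR       int
    #define CPU_INFO_FOREACH(cii, ci)                               \
            for ((cii) = 0; (cii) < MAXCPUS &&                      \
                ((ci) = cpu_info[cii]) != NULL; (cii)++)

    extern struct cpu_info *curcpu(void);

    static int
    cpu_number(void)
    {
            return curcpu()->ci_cpuid;
    }

    /* Typical use of the iterator:
     *      CPU_INFO_ITERATOR cii;
     *      struct cpu_info *ci;
     *
     *      CPU_INFO_FOREACH(cii, ci)
     *              do_something(ci);
     */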
|
|
defined; cp0access.S relies on this.
|
|
in the kernel to be brought in, due to invasive differences in tlb operation.
Comes with a separate cache operations file due to the cache being R5k-style
with R10k-style way number encoding.
|
|
where it can use userret() instead of duplicating it.
|
|
there to trap.c which is its only user. This also cleans up multiple
inclusion of <machine/cpu.h> (because <machine/psl.h> includes it) in many
places.
|
|
MD code would free resources that couldn't be freed until we were no
longer running on that processor. However, it is unused on all
architectures since mikeb@'s tss changes on x86 earlier in the year.
ok miod@
|
|
which are uniform for the profclock on each cpu in an SMP system (but using
a different seed for each cpu). On all cpus, avoid seeding with a value out
of the [0, 2^31-1] range (since that is not stable).
ok kettenis drahn
|
|
anything special to prod a cpu to leave the idle loop in signotify.
powerpc, i386, amd64 and sparc64 will follow soon so that everyone has
the same interface to wake an idling cpu.
|
|
For now, sparc64 is arbitrarily set to 256 (the only architecture that didn't
have a practical limit on the number of cpus in the code).
|
|
ok miod@
|
|
Previously, when mi_switch picked up the same proc, we didn't clear the
flag, which meant that every time we serviced an AST we would attempt
a context switch. For some architectures, amd64 probably being the
most extreme, that meant attempting a context switch for every
trap and interrupt.
Now we clear_resched explicitly after every context switch, even if it
didn't do anything. This also allows us to remove some more code
in cpu_switchto (not done yet).
miod@ ok
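A minimal sketch of the fix being described, with approximate names (the real
flag lives in the per-cpu scheduler state and clear_resched() is MD): the flag
is cleared after every switch attempt, so picking the same proc again no longer
leaves it set for the next AST.

    /* Illustrative only. */
    struct schedstate {
            volatile int spc_resched;       /* set by need_resched() */
    };

    static void
    clear_resched(struct schedstate *spc)
    {
            spc->spc_resched = 0;
    }

    static void
    mi_switch(struct schedstate *spc)
    {
            /* ... pick the next proc, possibly the same one, switch ... */

            /*
             * Clear the flag even if nothing was switched; otherwise
             * every AST serviced afterwards would try to switch again.
             */
            clear_resched(spc);
    }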
|
|
code soon. Similar to what ddb does, but does not need ddb to be compiled in.
|
|
Define a symbolic ``cached'' attribute, to be used for cached mappings
regardless of the system's cache coherency.
|
|
when machdep.kbdreset is set, and the correct interrupt is fired,
the machine gets shut down.
with help from and ok jsing@, ok miod@
|
|
space, either cache coherent for regular mappings or uncached for
BUS_DMA_COHERENT mappings, as done on all other platforms with direct mappings.
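A sketch of the mapping choice, reusing the XKPHYS arithmetic from the earlier
sketch (BUS_DMA_COHERENT is the real flag name; its value, the helper and the
CCA values here are assumptions): COHERENT mappings get the uncached direct-map
window, everything else the cached one.

    /* Illustrative only. */
    #include <stdint.h>

    typedef uint64_t paddr_t;
    typedef uint64_t vaddr_t;

    #define BUS_DMA_COHERENT        0x0004  /* flag value is an assumption */
    #define CCA_NC                  2ULL
    #define CCA_CACHED              3ULL
    #define PHYS_TO_XKPHYS(pa, cca) \
            (0x8000000000000000ULL | ((uint64_t)(cca) << 59) | (uint64_t)(pa))

    /* Direct-mapped view of pa for a bus_dmamem_map()-style request. */
    static vaddr_t
    dmamem_direct_map(paddr_t pa, int flags)
    {
            if (flags & BUS_DMA_COHERENT)
                    return PHYS_TO_XKPHYS(pa, CCA_NC);      /* uncached */
            return PHYS_TO_XKPHYS(pa, CCA_CACHED);          /* cached */
    }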
|
|
bytes, no functional change.