Age | Commit message | Author |
|
|
|
similar lines so drm shouldn't either.
|
|
ok deraadt@, guenther@, dlg@
|
|
|
|
|
|
bits from it.
ok krw@ kettenis@
|
|
archs and different sized disk sectors. Make MBR have higher priority
than GPT. Add many paranoia checks and associated DPRINTF's to make
further development easier. Keep everything hidden behind #ifdef
GPT.
Tested and ok doug@ mpi@. Nothing bad seen by millert@.
|
|
delete coredump_trad(), uvm_coredump(), cpu_coredump(), struct md_coredump,
and various #includes that are superfluous.
This leaves compat_linux processes without a coredump callback. If that
ability is desired, someone should update it to use coredump_elf32() and
verify the results...
ok kettenis@
|
|
had a proper stdint.h. No ports fallout. OK guenther@ miod@
|
|
ok guenther@ deraadt@
|
|
a slightly complicated dance where we stash the PAE PDPTEs into the
hibernate resume pagetables and use those before turning off PAE.
Makes (un)hibernate work with the new PAE pmap.
ok mlarkin@
|
|
PAE pmap.
ok deraadt@, mlarkin@
|
|
support we're only wasting memory on the larger PAE page tables without
any real benefit. This allows some simplifications of the low-level
assembly code.
ok mlarkin@, deraadt@
|
|
using that much memory, go for it" tedu@ "I don't see any immediate downsides"
kettenis@
|
|
the extra CLD instructions from when that wasn't true
testing miod@ krw@
|
|
NX bit for userland and kernel W^X. Unlike the previous c.2008 PAE
experiment, this does not provide > 4GB phys ram on i386 - PAE is solely
being used for NX capability this time. If you need > 4GB phys, use amd64.
Userland W^X was committed yesterday by kettenis@, and we will shortly
start reworking the kernel like we did for amd64 a few months back to get
kernel W^X.
This has been in snaps for a few days and tested by kettenis and myself
as well.
ok deraadt@, kettenis@
|
|
mapping for the first page when tearing things down. Seems to fix the last
bug mlarkin@ has been chasing for a while.
ok mlarkin@
|
|
while we're chasing at least one remaining bug.
ok mlarkin@, deraadt@
|
|
discussed with deraadt
|
|
cleanup, responsible for various reaper panics pointed out on bugs@ this
morning.
ok deraadt@
|
|
commit.
ok deraadt@
|
|
ok kettenis@
|
|
ok armani, guenther, sthen
|
|
This commit ports the infrastructure to do binary code patching from amd64.
The existing code patching for SMAP is converted to the new infrastructure.
ok kettenis@
"should go in" deraadt@
|
|
interpretation of it isn't quite right. So instead of allocating memory
and slicing it based on the parameters returned by CPUID, simply use a member
in struct cpu_info like basically all other OSes out there do. Our struct
cpu_info is large enough to never cause any overlap. This makes the
mwait-based idle loop actually work. We still execute the CPUID instruction
to make sure monitor/mwait is properly supported by the hardware we're
running on.
ok sthen@, deraadt@, guenther@
|
|
EIP/RIP adjustment for ERESTART
ok mlarkin@
|
|
unification...
|
|
unneeded disable/enable_intr sequence around the PTE unmap operation.
|
|
reduce differences between PAE and no-PAE i386 pmaps.
|
|
certain pmap structures are allocated.
No functional change.
|
|
to deraadt, then myself) brings the PAE pmap on i386 (not touched in any
significant way for years) closer to the current non-PAE pmap and allows
us to take a big next step toward better i386 W^X in the kernel (similar to
what we did a few months ago on amd64). Unlike the original PAE pmap, this
diff will not be supporting > 4GB physical memory on i386 - this effort is
specifically geared toward providing W^X (via NX) only.
There still seems to be a bug removing certain pmap entries when PAE is
enabled, so I'm leaving PAE mode disabled for the moment until we can
figure out what is going on, but with this diff in the tree hopefully
others can help.
The pmap functions now operate through function pointers, due to the need
to support both non-PAE and PAE forms. My unscientific testing showed
less than 0.3% (a third of a percent) slowdown with this approach during
a base build.
Discussed for months with guenther, kettenis, and deraadt.
ok kettenis@, deraadt@
|
|
"sure" deraadt@
|
|
in use) now so that libkvm can be fixed before the rest of the bulk of PAE
support is committed.
requested by and ok deraadt@
|
|
happier
ok dlg@ jsing@ kettenis@ mlarkin@
|
|
it with respect to other instructions.
ok guenther@, mlarkin@
|
|
they do "interesting" things with APIs I want to change, and I can't
find any evidence anyone uses them anymore. Instead of burning time
on changes I can't test, I'll take a chance that no one will miss them.
no objections from anyone
ok mpi@ deraadt@ henning@ sthen@
|
|
|
|
handling into RAMDISK. This is now possible because the install media
has ample room. The goal is to reduce special cases where we may be
skipping (unknown) important operations...
ok mlarkin kettenis
|
|
change, just moving a few hundred lines of comments from one place to
another. Note that some of these comments are giant lies that will get
rewritten later.
ok deraadt@
|
|
mpsafe. Most (all?) other architectures now use pools for this, including
non-direct pmap architectures like sparc and sparc64. Use a special back-end
allocator for pool pages to solve bootstrapping problems. This back-end
allocator allocates the initial pages from kernel_map, switching to the
uvm_km_page allocator once the pmap has been fully initialized. The old
pv entry allocator allocated pages from kmem_map. Using the uvm_km_page
allocator avoids certain locking issues, but might change behaviour under
kva pressure. Time will tell if that's a good or a bad thing.
ok mlarkin@, deraadt@
|
|
reported by Michael (lesniewskister (at) gmail.com)
ok miod@ (who did all the hard work)
|
|
Use this on vax to correctly pick the end of the stack area now that the
stackgap adjustment code will no longer guarantee it is a fixed location.
|
|
machine/lock.h as appropriate.
|
|
support cpuid with other values than zero, and leave the ecx register
unchanged.
ok kettenis@
|
|
each arch used to have to provide an rw_cas operation, but now we
have the rwlock code build its own version. on smp machines it uses
atomic_cas_ulong. on uniproc machines it avoids interlocked
instructions by using straight loads and stores. this is safe because
rwlocks are only used from process context and processes are currently
not preemptible in our kernel. so alpha/ppc/etc might get a benefit.
ok miod@ kettenis@ deraadt@
|
|
ok guenther@
|
|
|
|
- rename uiomove() to uiomovei() and update all its users.
- introduce uiomove(), which is similar to uiomovei() but with a size_t.
- rewrite uiomovei() as an uiomove() wrapper.
ok kettenis@
|
|
tested with and without firmware files.
OK stsp@ deraadt@
|
|
- The sensor framework cannot fetch values on the right cpu
- sensor_task_register() calls malloc, and calling it is inappropriate
ok guenther
|