Age | Commit message | Author |
|
|
|
|
|
|
|
|
|
Based on our existing RA module for 11n.
The main difference is in dealing with 11ac-specific ratesets.
Tx rate selection heuristics remain identical.
Only supports 80MHz channels, for now. 160MHz is left for future work.
ok sthen@
|
|
lm700x: az, rt
tc921x: sfr
pt2254a: sfr, sf2r
|
|
used by wdsc on sgi (removed in 2021)
ok krw@
|
|
|
|
|
|
|
|
the big change is removing the integration with and reliance on
bridge(4) for learning vxlan endpoints. we have the etherbridge
layer now (which is used by veb, nvgre, bpe, etc) so vxlan can
operate independently of bridge(4) (or any other driver) while still
dynamically learning about other endpoints.
vxlan now uses the udp socket upcall mechanism to receive packets.
this means it actually creates and binds udp sockets to use rather
than adding code in the udp layer for stealing packets out of it.
i think it's also important to note that this adds loop prevention
to the code. this stops a vxlan interface from being used to transmit
a packet that it itself encapsulated.
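as a rough illustration of the loop prevention idea only: mark the
mbuf when vxlan encapsulates it, and refuse to encapsulate it again
if the mark is already present. the tag name and function below are
hypothetical, not the actual vxlan(4) code.

    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <sys/errno.h>

    /* PACKET_TAG_VXLAN_LOOP and vxlan_encap_guard() are made up here */
    int
    vxlan_encap_guard(struct mbuf *m)
    {
        struct m_tag *mtag;

        /* already encapsulated by a vxlan interface? then this is a loop */
        if (m_tag_find(m, PACKET_TAG_VXLAN_LOOP, NULL) != NULL)
            return (ELOOP);

        mtag = m_tag_get(PACKET_TAG_VXLAN_LOOP, 0, M_NOWAIT);
        if (mtag == NULL)
            return (ENOBUFS);
        m_tag_prepend(m, mtag);

        return (0);
    }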
i want to clear this out of my tree where it's been sitting for
nearly a year. no one seems too concerned with the change either
way.
ok claudio@
|
|
This splits out the MI sequencing, backing it with per-architecture helper
functions. Further steps will be necessary because ACPI and MD are too
tightly coupled, but soon we'll be able to use this code for more architectures
(which depends on figuring out the lowest-level cpu sleeping method).
ok kettenis
|
|
OK deraadt@ phessler@
|
|
|
|
if it is supported. Remove it from the global GENERIC config.
OK visa@ claudio@
|
|
API implemented is a dead end.
OK akoshibe@ yasuoka@ deraadt@ kn@ patrick@ sthen@
|
|
|
|
|
|
|
|
ok deraadt@
|
|
|
|
this allows us to dynamically trace function boundaries with btrace by patching
prologues and epilogues with a breakpoint; when it fires, the handler records
the data and sends it back to userland for btrace to consume.
currently it's hidden behind DDBPROF, and there is still a lot to clean up and
improve, but basic scripts that observe return codes from a probed function
work.
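conceptually it works roughly like the outline below; every name in
this sketch is hypothetical and it is not the DDBPROF/dt(4) code,
just the shape of breakpoint-based function boundary tracing.

    /* save the original instruction and patch in a breakpoint */
    void
    prof_probe_enable(struct prof_probe *pp)
    {
        pp->pp_insn = kread_insn(pp->pp_addr);
        kwrite_insn(pp->pp_addr, BREAKPOINT_INSN);
    }

    /* breakpoint handler: record the event, then step over the patched site */
    int
    prof_breakpoint(struct trapframe *tf)
    {
        struct prof_probe *pp = prof_probe_lookup(trapframe_pc(tf));

        if (pp == NULL)
            return (0);                 /* not one of ours; leave it to ddb */

        dt_record(pp, tf);              /* queue data for btrace to consume */
        emulate_insn(tf, pp->pp_insn);  /* resume past the breakpoint */
        return (1);
    }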
from Tom Rollet, with various changes by me
feedback and ok mpi@
|
|
|
|
ok deraadt@ kettenis@
|
|
OK deraadt@
|
|
Support for skipping frames is missing on arm64 and i386, but the stack
traces are useful anyway. sparc64 should work, but I could not
test it. Other architectures do not have stacktrace_save_at() and
the dynamic tracer does not link there.
from patrick@; OK semarie@
|
|
|
|
|
|
Keeping files in CVS HEAD for now until we are certain we're not going back.
ok deraadt@
|
|
Otherwise compiling a kernel without any other wifi drivers fails.
OK patrick deraadt
|
|
ok deraadt@
|
|
|
|
OK deraadt@ mpi@
|
|
Written by Christian Ehrhardt and myself, based on ieee80211_mira.c
but with significant changes.
The main difference is that RA does not attempt to precisely measure
actual throughput but simply deducts a loss percentage from the
theoretical throughput which can be achieved by a given MCS.
Unlike MiRA, RA does not use timeouts to trigger probing.
Probing is triggered only by changes in measured throughput.
Unlike MiRA, RA doesn't care whether a frame was part of an A-MPDU.
RA simply collects statistics for individual subframes. This makes reporting
very easy for drivers and seems to work well enough in practice.
Another difference is that drivers can report multi-rate retries properly
via ieee80211_ra_add_stats_ht(mcs, total, fail) which can be called
several times before ieee80211_ra_choose() selects a new Tx rate.
There is no reason any issues could not be fixed in ieee80211_mira.c, but
I felt it was a good moment to burn the house down and start over.
And since this code diverges from how MiRA is described in the research
paper, applying the "MiRA" label becomes inappropriate.
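As a rough sketch of how a driver feeds this interface: the driver
structures and surrounding context below are made up for illustration;
only the two ieee80211_ra_* entry points come from the description
above, and their exact prototypes are an assumption.

    /* hypothetical driver Tx-completion path; RA prototypes are assumed */
    void
    drv_txdone(struct drv_softc *sc, struct ieee80211_node *ni,
        int mcs, uint32_t total, uint32_t fail)
    {
        struct drv_node *dn = (struct drv_node *)ni;  /* holds RA state dn_rn */
        struct ieee80211com *ic = &sc->sc_ic;

        /* report per-subframe Tx statistics; may be called several times */
        ieee80211_ra_add_stats_ht(&dn->dn_rn, ic, ni, mcs, total, fail);

        /* let RA pick a new Tx rate based on the accumulated statistics */
        ieee80211_ra_choose(&dn->dn_rn, ic, ni);
    }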
|
|
apart from the semantic differences between bridge(4) and veb(4),
the only bits missing from veb(4) are transparent ipsec interception
support and spanning tree.
|
|
my intention is to replace bridge(4), but the way it works is
different enough from bridge that a name change is justified
to distinguish them. it also makes it easier to commit it to the
tree and work on it in parallel to bridge, and allows a window of
migration.
the main difference between veb(4) and bridge(4) is how they use
interfaces as ports. veb takes over interfaces completely and only
uses them to receive and transmit ethernet packets. bridge also uses
each interface as a port to the ethernet segment it's connected to,
but also tries to continue supporting the use of the interface as
a way to talk to the network stack on the local system. supporting
the use of interfaces for both external and local communication is
where most of my confusion with bridge comes from, both when i'm
trying to operate it and when i'm trying to understand the code. changing this
semantic is where most of the simplification in veb comes from
compared to bridge.
because veb takes over interfaces, the ethernet network set up on
a veb is isolated from the host network stack. by default veb does
not interact with pf or the ip (and mpls) stacks. to enable pf for
ip frames going over veb ports, link1 on the veb interface must be
set. to have the stack interact with a veb network, vport interfaces
must be created and added as ports to a veb.
the vport interface driver is provided as part of veb, and is handled
specially by veb. veb usually prevents the use of ports by the stack
for sending and receiving packets; that is exactly why vports exist,
so veb has special handling for them.
veb already supports a lot of the other features that bridge has,
including bridge rules and protected domains, but i got tired of
working out of the tree and stopped implementing them. the main
outstanding features are better address table management, the
blocknonip flag on ports, transparent ipsec interception, and
spanning tree. i may not bother with spanning tree unless someone
tells me that they actually use it.
the core ethernet learning bridge functionality is provided by the
etherbridge code that was factored out of nvgre and bpe. veb is
already (a lot) faster than bridge, and is better prepared to operate
in parallel on multiple CPUs.
thanks to hrvoje popovski for testing some earlier versions of this.
discussed with many
ok patrick@ jmatthew@
|
|
the "ports" that nvgre provides to etherbridge are ip addresses
used in the underlay network.
ok patrick@ jmatthew@
|
|
it's pretty straightforward since etherbridge was mostly based on
this code in the first place. the etherbridge_ops that bpe provides
to etherbridge set entries up to point at mac addresses in the
underlay network.
ok patrick@ jmatthew@
|
|
this allows for the factoring out of the learning bridge code i
wrote in bpe and nvgre, and should be reusable for other drivers
needing a mac learning bridge.
the core data structures are an etherbridge struct to represent the
learning bridge, eb_entry structs for each mac address entry that
the bridge knows about, and an etherbridge_ops struct that drivers
fill in so that they can use this code.
eb_entry structs are stored in a hash table made up of SMR_TAILQs
to support quick, concurrent lookups of entries in the
forwarding path. they are also stored in a locked red-black tree
to help manage the uniqueness of mac addresses in the table.
the etherbridge_ops handlers mostly deal with comparing and testing
the "ports" associated with mac address table entries. the "port"
that a mac address entry is associated with is opaque to the
etherbridge code, which allows for this code to be used by nvgre
and bpe which map mac addresses inside the bridge to addresses in
their underlay networks. it also supports traditional bridges where
"ports" are actual interfaces.
ok patrick@ jmatthew@
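for illustration, a driver hooks into this roughly as sketched below;
the member and function names are invented here and do not claim to
match the real header, they just show the shape of the ops table
described above.

    /* hypothetical sketch of a driver's etherbridge glue */
    static int   drv_eb_port_eq(void *arg, void *a, void *b);
    static void *drv_eb_port_take(void *arg, void *port);
    static void  drv_eb_port_rele(void *arg, void *port);

    static const struct etherbridge_ops drv_eb_ops = {
        /* is the "port" (endpoint) behind two entries the same? */
        .eb_port_eq   = drv_eb_port_eq,
        /* take a reference on a "port" when a new mac entry is learned */
        .eb_port_take = drv_eb_port_take,
        /* release it again when the entry expires or is flushed */
        .eb_port_rele = drv_eb_port_rele,
    };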
|
|
The RAID1C discipline encrypts data like the CRYPTO discipline, and accepts
multiple chunks during creation and assembly like the RAID1 discipline.
To deal with failing disks, a RAID1C volume may be assembled with a smaller
number of chunks than the volume was created with. The volume will then come
up in degraded state. If the volume is now detached and assembled again with
the correct number of chunks, any re-added chunks will require a rebuild.
Consequently, assembling RAID1C volumes requires careful attention to the
chunks passed via 'bioctl -l'. If a chunk is accidentally omitted from the
command line during volume assembly, then this chunk will need to be rebuilt.
At least one known-good chunk is required in order to assemble the volume.
Like CRYPTO, RAID1C supports passphrase and key-disk authentication.
Key-disk based volumes are assembled automatically if the key disk is present
while the system is booting up.
Unlike CRYPTO and RAID1, there is no boot support for RAID1C yet.
RAID1C largely reuses existing code of RAID1 and CRYPTO disciplines.
At present RAID1C's discipline-specific data structure is shared with that
of the CRYPTO discipline to allow re-use of existing CRYPTO code. A custom
RAID1C data structure would require CRYPTO code to access struct sr_crypto
via a pointer instead of via a member field of struct sr_discipline.
ok jsing@
|
|
|
|
OK deraadt@
|
|
The global "tickadj" variable is a remnant of the old NTP adjustment
code we used in the kernel before the current timecounter subsystem
was imported from FreeBSD circa 2004 or 2005.
Fifteen years later it is completely vestigial and we can remove it.
We probably should have removed it long ago but I guess it slipped
through the cracks. FreeBSD removed it in 2002:
https://cgit.freebsd.org/src/commit/?id=e1d970f1811e5e1e9c912c032acdcec6521b2a6d
NetBSD and DragonflyBSD can probably remove it, too.
We export tickadj via the kern.clockrate sysctl(2), so update sysctl.2
and sysctl(8) accordingly. Hypothetically this change could break
someone's sysctl(8) parsing script. I don't think that's very likely.
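For reference, a minimal userland reader of kern.clockrate after the
change looks roughly like this (example written for this note, not
part of the commit); scripts that still expect a tickadj field are
the kind that could break:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/time.h>
    #include <stdio.h>

    int
    main(void)
    {
        int mib[2] = { CTL_KERN, KERN_CLOCKRATE };
        struct clockinfo ci;
        size_t len = sizeof(ci);

        if (sysctl(mib, 2, &ci, &len, NULL, 0) == -1)
            return (1);

        /* tickadj is gone; the remaining fields are still reported */
        printf("hz=%d tick=%d stathz=%d profhz=%d\n",
            ci.hz, ci.tick, ci.stathz, ci.profhz);
        return (0);
    }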
ok mvs@
|
|
in preparation for upcoming ACPI attachment.
ok kettenis@
|
|
ok deraadt@
|
|
ok deraadt@
|
|
|
|
ok deraadt@
|
|
|
|
this fell out of a discussion with mortimer
ok kettenis
|