Tested by Hrvoje Popovski, inputs and ok visa@
|
|
it works less well when you look before the adj
|
|
|
|
|
|
the mbuf prio will still be set according to the llprio value, but the
tos on the packet may be forced to a specific number by txprio
|
|
rfc1853 is about IP in IP Tunneling. rfc2003, about IP Encapsulation
within IP, agrees.
|
|
|
|
for l3 interfaces (gre and mgre), allow txprio from the payload,
the mbuf, or a hardcoded value. for l2 interfaces (egre, nvgre, and
eoip), get txprio from the mbuf or a hardcoded value.
ok claudio@
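The selection can be pictured with a small sketch; the TXPRIO_* names and
the helper are made up for illustration (the driver uses its own
constants), but m_pkthdr.pf.prio is where the mbuf priority lives:

#include <sys/types.h>
#include <sys/mbuf.h>

#define TXPRIO_PAYLOAD	-1	/* hypothetical: copy prio from the inner header */
#define TXPRIO_PACKET	-2	/* hypothetical: copy prio from the mbuf */

/* sketch: choose the tos/tclass for the encapsulating header */
static uint8_t
tunnel_txprio(int txprio, const struct mbuf *m, uint8_t inner_tos)
{
	switch (txprio) {
	case TXPRIO_PAYLOAD:	/* l3 (gre, mgre) only */
		return (inner_tos);
	case TXPRIO_PACKET:	/* any type: mbuf prio into the top 3 bits */
		return (m->m_pkthdr.pf.prio << 5);
	default:		/* fixed value 0-7 set via ifconfig */
		return ((uint8_t)(txprio << 5));
	}
}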
|
|
gif encaps l3, so it can get a prio from the payload, as well as
from the mbuf itself, or a hardcoded value.
ok claudio@
|
|
etherip puts the prio in the encapsulating ip header, and supports
using hardcoded prio values or the prio from the mbuf. it encapsulates
ethernet, which doesn't have a prio field unless you parse the ether
payload, which is not worth it.
ok claudio@
|
|
ok claudio@
|
|
a tx header prio can be set to a fixed value from 0 to 7, or to magic
values that populate the prio field from the encapsulated packet or
from the mbuf prio value.
ok claudio@
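Configuration-side, the check implied by that range is roughly as follows;
the names are illustrative assumptions, not the real ioctl code:

#include <sys/errno.h>

#define TXPRIO_PAYLOAD	-1	/* hypothetical magic value */
#define TXPRIO_PACKET	-2	/* hypothetical magic value */

/* sketch: accept the magic values or a fixed prio that fits in 3 bits */
static int
hdrprio_check(int prio)
{
	if (prio == TXPRIO_PAYLOAD || prio == TXPRIO_PACKET)
		return (0);
	if (prio < 0 || prio > 7)
		return (EINVAL);
	return (0);
}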
|
|
802.11 interface state changes (e.g. SSID) to interested parties.
Original diff from phessler@. Many suggestions and tweaks from
claudio@, stsp@, anton@.
ok claudio@ stsp@ anton@ phessler@
|
|
|
|
this prevents creation of tap and tun devices that you cannot open
from userland because of the limit on the number of dev_t minor
numbers.
the lack of a limit was pointed out by Greg Steuck
ok deraadt@ guenther@
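A sketch of the kind of check this implies, assuming the clone-create path
simply refuses units beyond what a dev_t minor can encode (the exact
expression in the driver may differ):

#include <sys/types.h>
#include <sys/errno.h>

/* sketch: reject clone units that could never be opened via /dev */
static int
clone_unit_check(int unit)
{
	if (unit < 0 || unit > minor(~0))
		return (ENXIO);
	return (0);
}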
|
|
|
|
|
|
the llprio is already used to set the gre and eoip packet tos/tclass,
but it was queued at the default prio before this.
|
|
llprios are valued 0 to 7, while the ip tos/dscp/tclass is an 8 bit
value. fortunately the high 3 bits map nicely to the llprio values,
so we shift the llprio into place when generating the keepalive
frames. the llprio is defaulted to the value that cisco uses for
their gre keepalives.
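The mapping is just a shift; for example, an llprio of 6 gives a
tos/tclass of 0xc0:

#include <sys/types.h>

/* sketch: the precedence sits in the top 3 bits of the tos/tclass byte,
 * so a 3-bit llprio maps onto it with a shift */
static inline uint8_t
llprio_to_tos(uint8_t llprio)
{
	return (llprio << 5);
}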
|
|
m_leadingspace() and m_trailingspace(). Convert all callers to call
the functions directly and remove the defines.
OK krw@, mpi@
|
|
|
|
by making all handlers consistent.
ok bluhm@, visa@
|
|
the timeout gets configured instead of gre_up().
this avoids complex gre_ioctl() ordering rules and
enables the sc_ka_hold timeout before the first packet
is received.
from markus@
|
|
OK bluhm@ kn@
|
|
HFSC on a vlan(4) (or similar) interface caused all packets over
that interface to get marked with the highest packet priority, no
matter what the rest of the system said about it. Leaving
the prio alone lets the rest of the network still do something
useful, no matter whether the local system queues packets in a
particular way.
Reported by and fix tested by Adrian Close
ok claudio@ kn@ mikeb@
|
|
|
|
from markus@
|
|
|
|
check sc_tunnel.t_af for AF_UNSPEC, otherwise we panic in gre_encap()
from markus@
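The guard amounts to a couple of lines at the top of the keepalive path
(sketch; sc is the gre softc mentioned above):

	/* sketch: don't build a keepalive before a tunnel address family
	 * has been configured, otherwise gre_encap() gets AF_UNSPEC */
	if (sc->sc_tunnel.t_af == AF_UNSPEC)
		return;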
|
|
gre_keepalive_send() should re-schedule immediately, otherwise we
stop sending keepalives on temporary mbuf shortage or if the
configuration is incomplete.
from markus@
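The ordering matters: re-arm the timeout before anything that can fail,
so one failed attempt can't stop the cycle (sketch; the timeout and
interval field names are assumed):

	/* sketch of gre_keepalive_send(): schedule the next run first ... */
	timeout_add_sec(&sc->sc_ka_send, sc->sc_ka_timeo);

	/* ... then try to build and send this keepalive; if mbuf allocation
	 * fails or the configuration is incomplete, simply return and let
	 * the next timeout fire anyway */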
|
|
The packet processing done after the protocol detection effectively
gets thrown away by the keepalive handling, so this saves some time,
and avoids confusing tcpdump on the interface. Keepalives the driver
transmits aren't made available to bpf, so taking them away from the
receive side is consistent.
discussed with and tested by markus@
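Roughly, the receive path now looks like this (simplified sketch; the
helper names are hypothetical):

	/* sketch: consume our own keepalives before bpf and protocol input */
	if (gre_pkt_is_keepalive(sc, m)) {	/* hypothetical check */
		gre_keepalive_recv(sc, m);	/* refresh hold timer, free m */
		return;
	}
	/* ... bpf_mtap() and protocol input only happen after this point ... */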
|
|
The regression was introduced in revision 1.1024 (in the 6.2 time frame).
It was discovered and reported by Fabian Mueller-Knapp. A fair amount
of credit goes to kn@, benno@ and henning@ for pointing me to the relevant
section of pf.conf(5). Fabian and kn@ also tested the patch.
OK kn@, henning@
|
|
this gives ipv6 handling equivalent to the tos stuff in ipv4.
ok visa@ benno@
|
|
Replace hardcoded 0 and implicit checks with enum as done in all other
use cases of `pfra_fback'. No object change.
OK sashan
|
|
When evaluating the anchor's ruleset, prevent clobbering its very own
`quick' test result by blindly setting it.
This makes the following pf.conf work as intended (packets would be blocked
since `quick' had no effect):
anchor quick {
pass
}
block
Broken since after 6.1 release as reported by Fabian Mueller-Knapp, thanks!
OK henning sashan
|
|
When a pfsync interface is being deleted, all its timeout handlers and
pfsync_send_dispatch() have to stop accessing the software context
before the context is freed. Ensure sufficient synchronization by
acquiring NET_LOCK() and clearing `pfsyncif' inside the critical
section in pfsync_clone_destroy(). When a timeout handler has entered
the critical section, it has to check `pfsyncif' and bail out if the
value is NULL. pfsync_send_dispatch() already does this check.
Issue reported and fix tested by Hrvoje Popovski.
OK mpi@ bluhm@
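The pattern reads roughly like this (simplified sketch; the handler name
is made up):

#include <sys/systm.h>

/* sketch: a pfsync timeout handler bailing out once the interface is gone */
void
pfsync_timeout_sketch(void *arg)
{
	struct pfsync_softc *sc;

	NET_LOCK();
	sc = pfsyncif;
	if (sc == NULL) {
		/* pfsync_clone_destroy() cleared pfsyncif; nothing to do */
		NET_UNLOCK();
		return;
	}
	/* ... sc is safe to use while NET_LOCK() is held ... */
	NET_UNLOCK();
}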
|
|
OK bluhm@
|
|
This fixes certain operations such as `pfctl -t foo -T show' when the
system is in "Highly secure mode". `pfctl -t foo -T show -v' would already
work due to a different ioctl (DIOCRGETASTATS) being used.
Reported by Zbyszek Żółkiewski, thanks!
OK sthen sashan
|
|
Wireless drivers call if_enqueue() without holding the NET_LOCK(), so
the NET_LOCK() cannot be used to serialize bridge(4) states.
Found by stsp@, ok visa@
|
|
ok visa@
|
|
RTF_LOCAL entries or static ARP entries don't have parents, so the logic
was incorrect. Note that it might be possible to extend the logic to work
with non-cloned L2 entries but the few use cases do not justify the
complexity (yet).
Problem reported & fix tested by Elie Bouttier.
ok bluhm@, visa@, claudio@
|
|
Tested by Hrvoje Popovski, who measured a 30% improvement in forwarded
packets in the best case.
ok visa@
|
|
ok bluhm@, visa@
|
|
start locking the socket. An inp can be referenced by the PCB queue
and hashes, by a pf mbuf header, or by a pf state key.
OK visa@
|
|
regression with iked(8).
Reported by Mark Patruck.
|
|
ok claudio@
|
|
crosshairs.
|
|
this change adds a pf_state_lock rw-lock, which protects the consistency
of the state table in PF. The code delivered in this change is guarded
by 'WITH_PF_LOCK', which is still undefined. People who are willing
to experiment and want to run it must do two things:
- compile kernel with -DWITH_PF_LOCK
- bump NET_TASKQ from 1 to ... sky is the limit,
(just select some sensible value for number of tasks your
system is able to handle)
OK bluhm@
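The guard pattern being described is roughly the following (macro and
variable names here are illustrative, not necessarily the ones in pf):

#include <sys/rwlock.h>

#ifdef WITH_PF_LOCK
struct rwlock pf_state_lock = RWLOCK_INITIALIZER("pfstatelk");
#define PF_STATE_ENTER_WRITE()	rw_enter_write(&pf_state_lock)
#define PF_STATE_EXIT_WRITE()	rw_exit_write(&pf_state_lock)
#else
/* compiled out by default while WITH_PF_LOCK stays undefined */
#define PF_STATE_ENTER_WRITE()	do { } while (0)
#define PF_STATE_EXIT_WRITE()	do { } while (0)
#endif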
|
|
so we can let go of if_cloners_lock.
OK tb@, claudio@, bluhm@, kn@, henning@
|
|
put the algorithm into a new function m_calchdrlen(). Also set an
uninitialized m_len to 0 in NFS code.
OK claudio@
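What such a calculation amounts to, as a sketch (the real m_calchdrlen()
may differ in its details):

#include <sys/mbuf.h>

/* sketch: recompute the packet header length from the mbuf chain */
static void
calc_hdr_len(struct mbuf *m)
{
	struct mbuf *n;
	int len = 0;

	for (n = m; n != NULL; n = n->m_next)
		len += n->m_len;
	m->m_pkthdr.len = len;
}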
|