|
This eliminates a forced context switch to the idle proc. In addition,
sched_exit() no longer needs to sum proc runtime because mi_switch()
will do it.
OK mpi@ a while ago
|
|
added to the runqueue of a CPU.
This fixes out-of-sync cases where the priority of a thread wasn't reflecting
the runqueue it was sitting in, leading to unnecessary context switches.
ok visa@
|
|
it larger than RC4STATE. A long discussion ensued. In conclusion all
entropy inputs are either satisfactory enough, or just as shitty at 512.
|
|
The lookup in uvm_map_inentry_fix() is already serialized by the
vm_map_lock and is already executed without the KERNEL_LOCK().
ok kettenis@, deraadt@
|
|
Save the PC after checking if it belongs to the kernel.
|
|
This makes db_save_stack_trace() and db_stack_dump() work.
ok deraadt@, kettenis@
|
|
that platform have been trickling in bit by bit. One of those
changes unfortunately introduced a regression in cache flushes. The
check for the length in the cache-flush-loop was changed from the
instruction bpl to bhi. This has the effect that it does not branch
on zero anymore. Due to the length decrement at the beginning of
the function, which was not removed, a length of (n * cacheline) + 1
means that the loop misses one run! This means it is possible that
the last byte of a DMA transfer was incorrect, as one could see on
network packets often enough. Remove that instruction, which makes
it even more similar to the OpenBSD/arm64 code.
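As a rough illustration, here is a minimal C model of the off-by-one (the
real code is armv7 assembly; the CACHELINE constant and names below are
made up):

    #include <stdio.h>

    #define CACHELINE 32

    /* model of the flush loop: returns how many cache lines get flushed */
    static int
    count_flushed_lines(int len, int use_bhi)
    {
        int lines = 0;

        len -= 1;                       /* length decrement at function entry */
        do {
            lines++;                    /* stands in for flushing one line */
            len -= CACHELINE;
        } while (use_bhi ? len > 0 : len >= 0);

        return lines;
    }

    int
    main(void)
    {
        int len = 4 * CACHELINE + 1;    /* (n * cacheline) + 1 bytes */

        printf("bpl-style: %d\n", count_flushed_lines(len, 0));  /* 5 lines */
        printf("bhi-style: %d\n", count_flushed_lines(len, 1));  /* 4 lines */
        return 0;
    }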
ok deraadt@
|
|
This helps a bit in situations where a single AP is used and background scans
are causing packet loss, as seen with Jesper Wallin's Broadcom-based AP and my
Android phone in hotspot mode. This is not a proper fix but our background scan
frequency against a single AP was much higher than needed anyway.
Tested by jan, job, benno, Tracey Emery, Jesper Wallin
|
|
From Hans de Goede
4d5307c099afc9ce5fe89e8acf9b3c65104d0e08 in linux 4.19.y/4.19.81
984d7a929ad68b7be9990fc9c5cfa5d5c9fc7942 in mainline linux
|
|
From Thomas Hellstrom
11377c3e997eca9c9ff562fc4fc7a41a455bddf6 in linux 4.19.y/4.19.81
941f2f72dbbe0cf8c2d6e0b180a8021a0ec477fa in mainline linux
|
|
From Kai-Heng Feng
33af2a8ee304ee2deb618eebb534b52ce166467f in linux 4.19.y/4.19.81
11bcf5f78905b90baae8fb01e16650664ed0cb00 in mainline linux
|
|
From Alex Deucher
0933b0db7fb239be01270b25bf73884870d8c1e6 in linux 4.19.y/4.19.81
8d13c187c42e110625d60094668a8f778c092879 in mainline linux
|
|
non-cloned devices. Combine spec_close() and spec_close_clone() to avoid
code duplication.
OK mpi@
|
|
section, which has grown a fair bit with the introduction of retguard.
Mortimer discovered the repeated 512-byte sequence as retguard keys, and
this resolves the issue. (Chacha does not fit on the media, so 1.5K early
drop RC4 is hopefully sufficient in our KARL link universe)
Version crank the bootblocks. sysupgrade -s will install new bootblocks.
ok djm mortimer
|
|
ok djm mortimer
|
|
shared memory segment. Otherwise, if copyin ends up sleeping, it allows
another thread to remove the same segment, leading to a use-after-free.
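A hedged userland sketch of that ordering (all names are made up; this is
not the actual sysv shm code):

    #include <stdio.h>
    #include <string.h>

    struct shmid_ds { int shm_perm; };

    struct segment {
        int             in_use;
        struct shmid_ds ds;
    };

    static struct segment segtab[1] = { { 1, { 0 } } };

    /* stand-in for copyin(9); in the kernel this may sleep on a page
     * fault, during which another thread could remove the segment */
    static int
    copyin_may_sleep(const void *uaddr, void *kaddr, size_t len)
    {
        memcpy(kaddr, uaddr, len);
        return 0;
    }

    /* copy the user data in first, then look up and use the segment
     * without sleeping in between */
    static int
    shmctl_set(int shmid, const struct shmid_ds *uaddr)
    {
        struct shmid_ds ds;
        struct segment *seg;
        int error;

        if ((error = copyin_may_sleep(uaddr, &ds, sizeof(ds))) != 0)
            return error;
        seg = &segtab[shmid];
        if (!seg->in_use)
            return -1;
        seg->ds = ds;
        return 0;
    }

    int
    main(void)
    {
        struct shmid_ds ds = { 42 };

        printf("%d\n", shmctl_set(0, &ds));
        return 0;
    }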
Feedback from kettenis@ and ok guenther@
Reported-by: syzbot+0de42c2e600a6dd3091d@syzkaller.appspotmail.com
|
|
Patch by Imre Vadasz.
Cross-check and pcireg.h tweak by kettenis@
ok patrick@
|
|
Patch by Imre Vadasz
ok patrick@
|
|
be generated by version 17 firmware. While at it, declare all known
firmware command groups and all PHY_OPS subcommand ids.
Patch by Imre Vadasz, with tweaks by me
ok patrick@
|
|
All supported firmware versions support IWM_UCODE_TLV_FLAGS_TIME_EVENT_API_V2.
Rename struct iwm_time_event_cmd_v2 to iwm_time_event_cmd, and remove helper
functions for converting from V2 API structs to V1 versions.
Patch by Imre Vadasz
ok patrick@
|
|
IWM_UCODE_TLV_FLAGS_PM_CMD_SUPPORT
IWM_UCODE_TLV_FLAGS_NEWBT_COEX
IWM_UCODE_TLV_FLAGS_BF_UPDATED
IWM_UCODE_TLV_FLAGS_D3_CONTINUITY_API
IWM_UCODE_TLV_FLAGS_STA_KEY_CMD
IWM_UCODE_TLV_FLAGS_DEVICE_PS_CMD
IWM_UCODE_TLV_FLAGS_SCHED_SCAN
All supported firmware versions have these flags set.
Patch by Imre Vadasz
ok patrick@
|
|
All supported firmware versions have this feature flag set.
Remove now unneeded iwm_calc_rssi() function.
Patch by Imre Vadasz.
ok patrick@
|
|
It is only required for devices connected via SDIO which we do not support.
Patch by Imre Vadasz
ok patrick@
|
|
Aware Regulatory (LAR) mode and LAR is disabled according to NVM.
Patch by Imre Vadasz.
ok patrick@
|
|
Patch by Imre Vadasz.
Matches Linux commit 176aa60bf148b5af4209ac323cef941dee76e390 by Sara Sharon.
ok patrick@
|
|
which had a value different from the IWL_DEFAULT_MAX_TX_POWER
constant in Linux iwlwifi.
Patch by Imre Vadasz.
ok patrick@
|
|
Patch by Imre Vadasz.
ok patrick@
|
|
ok patrick@
|
|
ok patrick@
|
|
ok guenther@, mpi@
|
|
to connect, e.g. due to a timeout, we will switch the state to SCAN.
Unfortunately this skips clearing the active channel set, which
means that on a scan on all bands only the nodes on the active
channel set, which is defined by whatever node we tried to connect
to, are allowed and all other APs are ignored. Fix this by properly
calling begin_scan(). When we fail to connect and start a scan,
make sure to let the chip know that we don't want to associate
anymore.
Another issue existed when we interrupted a scan, for instance by
setting a new nwid or wpakey. In this case we didn't abort the
scan and started a new scan while the old one was still active.
This could lead to a SCAN -> SCAN transition loop.
Remove the "set ssid" event, since this would be an event in
addition to a failed auth/assoc event, which would make us try
to handle failure twice.
Discussed with and ok stsp@
|
|
64-bit unsigned arithmetic.
|
|
it's a bit shorter, and a bit more correct wrt use of bus_dma. still
a bit to go though.
|
|
ok jsg@, patrick@
|
|
ok jsg@
|
|
if gen is toggled per packet, then it needs to be toggled before
each packet, not before the loop. also, if 0 out the right offload.
brad pointed out the if 0 bit.
|
|
to make mpsafety a bit easier to figure out i disabled checksum
and vlan offload. i'll put them back in soon though.
|
|
vmx "hardware" seems to be able to use rx descriptors as soon as
they're filled in, which means filling the ring from a timeout can
run concurrently with an isr that's pulling stuff off the ring.
this is mostly a problem with the rxr accounting, so we serialise
updates to the alive counter by running rxfill in a mutex.
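roughly this shape, modelled in userland with a pthread mutex standing in
for the kernel mutex (names below are made up, not the actual if_vmx.c code):

    #include <pthread.h>
    #include <stdio.h>

    /* "alive" counts rx descriptors the chip is currently allowed to use */
    static pthread_mutex_t rxfill_mtx = PTHREAD_MUTEX_INITIALIZER;
    static int alive;

    /* called from the timeout path */
    static void
    rxfill(void)
    {
        pthread_mutex_lock(&rxfill_mtx);
        while (alive < 256)
            alive++;                /* hand a descriptor to the chip */
        pthread_mutex_unlock(&rxfill_mtx);
    }

    /* called from the interrupt path */
    static void
    rxeof(void)
    {
        pthread_mutex_lock(&rxfill_mtx);
        if (alive > 0)
            alive--;                /* chip consumed a descriptor */
        pthread_mutex_unlock(&rxfill_mtx);
    }

    static void *
    timeout_thread(void *arg)
    {
        for (int i = 0; i < 100000; i++)
            rxfill();
        return arg;
    }

    static void *
    intr_thread(void *arg)
    {
        for (int i = 0; i < 100000; i++)
            rxeof();
        return arg;
    }

    int
    main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, timeout_thread, NULL);
        pthread_create(&t2, NULL, intr_thread, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("alive = %d\n", alive);
        return 0;
    }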
most of the investigation was done by claudio@ and mathieu@
an earlier version of this diff was tested hard by mathieu@ and was
ok claudio@
|
|
Currently we return (1000000000 / hz) from clock_getres(2) as the
resolution for every clock. This is often untrue.
For CPUTIME clocks, if we have a separate statclock interrupt the
resolution is (1000000000 / stathz). Otherwise it is as we currently
claim: (1000000000 / hz).
For the REALTIME/MONOTONIC/UPTIME/BOOTTIME clocks the resolution is
that of the active timecounter. During tc_init() we can compute the
precision of a timecounter by examining its tc_counter_mask and store
it for lookup later in a new member, tc_precision. The resolution of
a clock backed by a timecounter "tc" is then
tc.tc_precision * (2^64 / tc.tc_frequency)
fractional seconds.
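A hedged sketch of that computation (not the actual tc_init() or
sys_clock_getres() code; the division trick and the __int128 conversion
below are assumptions):

    #include <stdint.h>
    #include <stdio.h>

    /* resolution in nanoseconds of a clock backed by a timecounter,
     * i.e. tc_precision * (2^64 / tc_frequency) fractional seconds */
    static uint64_t
    tc_resolution_ns(uint64_t tc_frequency, uint64_t tc_precision)
    {
        uint64_t tick_frac, res_frac;

        /* one counter tick in 2^-64 fractional seconds: 2^64 / freq,
         * computed without needing 128-bit division */
        tick_frac = (-tc_frequency / tc_frequency) + 1;
        res_frac = tc_precision * tick_frac;

        /* fractional seconds -> nanoseconds */
        return (uint64_t)(((unsigned __int128)res_frac * 1000000000ULL) >> 64);
    }

    int
    main(void)
    {
        /* e.g. a 3579545 Hz ACPI PM timer with a precision of 1 count */
        printf("%llu ns\n",
            (unsigned long long)tc_resolution_ns(3579545, 1));
        return 0;
    }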
While here we can clean up sys_clock_getres() a bit.
Standards input from guenther@. Lots of input, feedback from
kettenis@.
ok kettenis@
|
|