Age | Commit message | Author |
|
|
|
other composite clocks. With this we can get the frequency for the OCOTP.
|
|
|
|
ok kettenis@
|
|
ok patrick@
|
|
ok kettenis@
|
|
process.
ok kettenis@ as part of a larger diff
|
|
ok kettenis@
|
|
ok kettenis@
|
|
ok kettenis@
|
|
ok kettenis@
|
|
ok kettenis@
|
|
Due to PLL1 instability, the system sometimes hangs, especially at
boot. This has been observed on the Allwinner H3/H2+ processors.
To solve the problem, the PLL1 setup procedure now follows Linux:
1. change clock source to 24MHz
2. wait 1usec (new)
3. disable PLL1 (new)
4. set new NKMP value, but M should be 1
5. re-enable PLL1 (new)
6. wait PLL1 stable (modified)
7. change clock source to PLL1
8. wait 1usec (new)
Disabling PLL1 before setting NKMP is very important. Also, the LOCK
flag is sometimes set even though the PLL has not locked yet, so the
wait for the PLL is changed to a simple delay() based on the value of
the PLL_STABLE_TIME_REG1 register.
The datasheets of all Allwinner processors (e.g. the A64), not just the
H3/H2+, say "If the clock source is changed, at most to wait for 8
present running clock cycles." in the CPU clock source selection field
of the CPU/AXI configuration register, but it is ambiguous _who_ should
do _what_ during those cycles.
It is unclear whether changing the clock source itself causes PLL1
instability. For safety, a 1usec wait was added after changing the
clock source, like Linux does.
ok by kettenis@, thanks to adr at sdf dot org
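The procedure maps onto the clock registers roughly as sketched below;
register offsets, bit masks and the softc layout are placeholders here,
not the actual sxiccmu(4) definitions.

/*
 * Sketch of the sequence above; names are illustrative only.
 */
void
ccmu_set_cpu_pll(struct ccmu_softc *sc, uint32_t nkmp)
{
	uint32_t reg, stable;

	/* 1. switch the CPU clock source to the 24 MHz oscillator */
	reg = bus_space_read_4(sc->sc_iot, sc->sc_ioh, CPUX_AXI_CFG);
	reg &= ~CPUX_CLK_SRC_SEL_MASK;
	reg |= CPUX_CLK_SRC_SEL_OSC24M;
	bus_space_write_4(sc->sc_iot, sc->sc_ioh, CPUX_AXI_CFG, reg);

	/* 2. wait 1usec for the mux to settle */
	delay(1);

	/* 3. disable PLL1 before touching its factors */
	reg = bus_space_read_4(sc->sc_iot, sc->sc_ioh, PLL_CPUX_CTRL);
	reg &= ~PLL_CPUX_ENABLE;
	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PLL_CPUX_CTRL, reg);

	/* 4. program the new NKMP factors, keeping M at 1 */
	reg &= ~PLL_CPUX_NKMP_MASK;
	reg |= nkmp;
	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PLL_CPUX_CTRL, reg);

	/* 5. re-enable PLL1 */
	reg |= PLL_CPUX_ENABLE;
	bus_space_write_4(sc->sc_iot, sc->sc_ioh, PLL_CPUX_CTRL, reg);

	/*
	 * 6. the LOCK flag can be set before the PLL has really locked,
	 *    so simply delay() for the time in PLL_STABLE_TIME_REG1.
	 */
	stable = bus_space_read_4(sc->sc_iot, sc->sc_ioh,
	    PLL_STABLE_TIME_REG1) & PLL_STABLE_TIME_MASK;
	delay(stable);

	/* 7. switch the CPU clock source back to PLL1 */
	reg = bus_space_read_4(sc->sc_iot, sc->sc_ioh, CPUX_AXI_CFG);
	reg &= ~CPUX_CLK_SRC_SEL_MASK;
	reg |= CPUX_CLK_SRC_SEL_PLL_CPUX;
	bus_space_write_4(sc->sc_iot, sc->sc_ioh, CPUX_AXI_CFG, reg);

	/* 8. wait another 1usec after switching back */
	delay(1);
}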
|
|
reliably.
|
|
the code configuring the parser, we do not yet add the proper multicast
filters.
|
register defines.
|
performance at 200 MHz, so restrict the maximum frequency to 150 MHz for now.
This also makes the eMMC on the ODROID-C4 work properly.
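The cap itself is just a clamp in the controller's clock setup path;
the helper below is only an illustration, not the driver's actual code.

/* Illustrative only; the real driver clamps its bus clock similarly. */
static inline uint32_t
emmc_clamp_busclock(uint32_t hz)
{
	const uint32_t max_hz = 150 * 1000 * 1000;	/* 150 MHz for now */

	return hz > max_hz ? max_hz : hz;
}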
|
prs entry there and also save a few lines.
|
|
entry, saving at least one unnecessary malloc(9)/free(9) cycle.
|
have stored the struct cpu_info * in the wrapper around the interrupt
handler cookie, but since we can have a few layers in between, this does
not seem very nice. Instead have each and every interrupt controller
provide a barrier function. This means that intr_barrier(9) will in the
end be executed by the interrupt controller that actually wired the pin
to a core. And that's the only place where the information is stored.
ok kettenis@
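Roughly, the shared code gains a per-controller hook along these lines;
struct members and names are illustrative, not the actual arm64/armv7
sources.

struct interrupt_controller {
	void	*ic_cookie;
	void	*(*ic_establish)(void *, int *, int, struct cpu_info *,
		    int (*)(void *), void *, char *);
	void	 (*ic_barrier)(void *);		/* new per-controller hook */
	/* ... */
};

struct machine_intr_handle {
	struct interrupt_controller	*ih_ic;
	void				*ih_cookie;	/* controller's cookie */
};

void
intr_barrier(void *cookie)
{
	struct machine_intr_handle *ih = cookie;

	/*
	 * Only the controller that wired the pin to a core knows which
	 * core that is, so the barrier is delegated to it.
	 */
	ih->ih_ic->ic_barrier(ih->ih_cookie);
}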
|
|
mtx ...: locking against myself" on Orange Pi Zero.
Analysis by patrick@:
"The thermal sensor framework uses its own taskq with IPL_SOFTCLOCK.
sxitemp(4) calls thermal_sensor_update() from interrupt context, and
sxitemp(4) is using IPL_VM (memory allocation?!) for its interrupt.
IPL_VM is obviously higher than IPL_SOFTCLOCK, so it ends up being able
to interrupt the taskq. Even though we're in msleep_nsec, I think we
have *not yet* given up the mutex, that we are holding while looking for
more work, only releasing it while sleeping.
Thus, the interrupt runs task_add(), which tries to grab the taskq's
mutex, even though the taskq already holds it!"
ok patrick@ kettenis@
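A simplified illustration of that inversion; the handler and task names
below are hypothetical, not the real driver code.

#include <sys/task.h>

extern struct taskq *sensors_tq;	/* created with IPL_SOFTCLOCK */
extern struct task   sensor_refresh;

int
sensor_intr(void *arg)			/* established at IPL_VM */
{
	/*
	 * Holding the taskq mutex only raises the IPL to softclock, so
	 * this IPL_VM handler can preempt the taskq thread on the same
	 * CPU while it owns that mutex.  task_add() then tries to take
	 * the mutex again and mtx_enter() sees the owner is the current
	 * CPU: "mtx ...: locking against myself".
	 */
	task_add(sensors_tq, &sensor_refresh);
	return (1);
}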
|
|
in the chipset tag for establishing interrupts now takes a struct cpu_info *.
The normal pci_intr_establish() macro passes NULL as ci, which indicates that
the primary CPU is to be used.
The PCI controller drivers can then simply pass the ci on to our arm64/armv7
interrupt establish "framework".
Prompted by dlg@
ok kettenis@
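In rough terms the compatibility macro and the new entry point look
like this; argument names and order are approximations, not the exact
pci_machdep.h text.

/* Rough shape only; not copied from pci_machdep.h. */
#define pci_intr_establish(pc, ih, lvl, fn, arg, name)			\
	pci_intr_establish_cpu((pc), (ih), (lvl), NULL, (fn), (arg), (name))

#define pci_intr_establish_cpu(pc, ih, lvl, ci, fn, arg, name)		\
	(*(pc)->pc_intr_establish)((pc)->pc_intr_v, (ih), (lvl), (ci),	\
	    (fn), (arg), (name))

A controller driver's pc_intr_establish implementation then simply
forwards ci to the arm64/armv7 interrupt establish framework instead of
assuming the primary CPU.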
|
|
a struct cpu_info *. From a driver point of view, the
fdt_intr_establish_* API now also exists in variants with a *_cpu
suffix. Internally the "old" functions now call their *_cpu
counterparts, passing NULL as ci; NULL is interpreted as the primary
CPU in the interrupt controller code.
The internal framework for interrupt controllers has been changed so
that the establish methods provided by an interrupt controller always
take a struct cpu_info *.
Some drivers, like imxgpio(4) and rkgpio(4), only have a single interrupt
line for multiple pins. On those we simply disallow trying to establish
an interrupt on a non-primary CPU, returning NULL.
Since we do not have MP yet on armv7, all armv7 interrupt controllers do
return NULL if an attempt is made to establish an interrupt on a different
CPU. That said, so far there's no way this can happen. If we ever gain
MP support, this is a reminder that the interrupt controller drivers have
to be adjusted.
Prompted by dlg@
ok kettenis@
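The wrapper pattern and the single-line-controller check look roughly
like this; the *_cpu argument order and the softc members are
assumptions, not the exact code.

/* Old entry point keeps its signature and selects the primary CPU. */
void *
fdt_intr_establish_idx(int node, int idx, int level,
    int (*func)(void *), void *arg, char *name)
{
	return fdt_intr_establish_idx_cpu(node, idx, level, NULL,
	    func, arg, name);
}

/*
 * A controller with one interrupt line for all pins cannot route a
 * single pin to another core, so it refuses non-primary CPUs.
 */
void *
rkgpio_intr_establish(void *cookie, int *cells, int ipl,
    struct cpu_info *ci, int (*func)(void *), void *arg, char *name)
{
	struct rkgpio_softc *sc = cookie;
	int pin = cells[0];

	if (ci != NULL && !CPU_IS_PRIMARY(ci))
		return NULL;

	sc->sc_pin_ih[pin].ih_func = func;	/* hypothetical members */
	sc->sc_pin_ih[pin].ih_arg = arg;
	/* ...unmask the pin interrupt in the hardware as before... */
	return &sc->sc_pin_ih[pin];
}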
|
|
ok dlg@ tobhe@
|
Manager Pools. Typically there are supposed to be a long and a short
pool, for different sizes of packets. Those pools are filled with
empty mbufs by giving the hardware the physical address and a cookie.
On RX, it returns us the address and the cookie, so that we can look
up which mbuf it was. Since I cannot be sure we always get the
buffers back in the order they were put in, there can be holes in the
list of RX buffers. Thus we keep a freelist where we record the
cookies of all buffers that have not yet been re-filled. By using a
pool per core, RX refill management should be easier once we try to
work with more queues. Also note that a single mvpp(4) controller can
have up to 3 ports, which means the individual ports share RX buffer
pools.
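A sketch of such a cookie freelist; structure and field names are
illustrative, not the actual mvpp(4) code.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/mbuf.h>
#include <machine/bus.h>

#define BM_POOL_SIZE	256		/* illustrative pool size */

struct bm_buf {
	struct mbuf	*mb_m;		/* mbuf handed to the pool */
	bus_dmamap_t	 mb_map;	/* its DMA map */
};

struct bm_pool {
	struct bm_buf	 bm_bufs[BM_POOL_SIZE];
	uint16_t	 bm_freelist[BM_POOL_SIZE];
	int		 bm_nfree;
};

/* On RX the hardware hands back the cookie it was given when filled. */
struct mbuf *
bm_take(struct bm_pool *pool, uint32_t cookie)
{
	struct bm_buf *buf = &pool->bm_bufs[cookie];
	struct mbuf *m = buf->mb_m;

	buf->mb_m = NULL;
	/* Remember the hole so a later refill can reuse this slot. */
	pool->bm_freelist[pool->bm_nfree++] = cookie;
	return m;
}

/* Refill takes any recorded hole, in whatever order buffers returned. */
int
bm_refill(struct bm_pool *pool, struct mbuf *m)
{
	uint32_t cookie;

	if (pool->bm_nfree == 0)
		return ENOBUFS;
	cookie = pool->bm_freelist[--pool->bm_nfree];
	pool->bm_bufs[cookie].mb_m = m;
	/* ...write the physical address and cookie to the BM registers... */
	return 0;
}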
|
|
receive interrupts for the physical TX queues, but the TX buffers
which need to be freed were taken from the aggregated (per core) TX
queue. This means we probably should have the physical TX queues
tied to specific cores, so that the TX enqueue and TX completion
share the same per-core info for the free-handling. For now we only
have a single physical and aggregated TX queue, so it's comparatively
easy.
|