Last commit made on 2020-04-09
Get this branch:
git clone -b lrg/topic/buffer-locking https://git.launchpad.net/~canonical-hwe-team/hwe-next/+git/sof

Branch merges

Branch information


Recent commits

72c3f53... by Liam Girdwood <email address hidden> on 2020-04-09

buffer: locking - only lock on buffer status updates.

WIP. Buffers should only be locked when changing buffer status.

The pipeline rules are simple:
1) Buffers only have 1 reader.
2) Buffers only have 1 writer.
3) Readers and writers can exist on different cores and run concurrently.

These rules mean there is no need to lock the whole buffer per component
for R or W access. Only one user updates the r_ptr and only one user
updates the w_ptr. The locking should only take place when we update
buffer W/R positions and hence the shared free/avail data sizes.

Reading the current buffer free/avail status does NOT need locking. The
free/avail sizes may change after reading (due to concurrent processing)
but will only grow (up to the buffer limits). There is nothing stopping a
component from re-reading free/avail during copy if it wants to
speculatively process more data.
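A minimal sketch of the scheme, using a generic pthread mutex in place of
SOF's spinlock and an illustrative `struct ring` rather than the real
`struct comp_buffer`: each position is owned by exactly one side, and only
the shared status update takes the lock.

```c
#include <pthread.h>
#include <stdint.h>

/* Illustrative SPSC ring, not SOF's struct comp_buffer. */
struct ring {
	uint8_t data[256];
	uint32_t r_pos;		/* updated only by the single reader */
	uint32_t w_pos;		/* updated only by the single writer */
	uint32_t avail;		/* shared status, updated under lock */
	pthread_mutex_t lock;
};

/* Reader/writer can poll status without the lock: a stale value only
 * under-reports, since avail can only grow for the reader (and free
 * for the writer) while they run. */
static uint32_t ring_avail(struct ring *b)
{
	return b->avail;
}

/* Writer commits n produced bytes: only the status update is locked. */
static void ring_produce(struct ring *b, uint32_t n)
{
	b->w_pos = (b->w_pos + n) % sizeof(b->data);
	pthread_mutex_lock(&b->lock);
	b->avail += n;
	pthread_mutex_unlock(&b->lock);
}

/* Reader commits n consumed bytes: only the status update is locked. */
static void ring_consume(struct ring *b, uint32_t n)
{
	b->r_pos = (b->r_pos + n) % sizeof(b->data);
	pthread_mutex_lock(&b->lock);
	b->avail -= n;
	pthread_mutex_unlock(&b->lock);
}
```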

Signed-off-by: Liam Girdwood <email address hidden>

5f7d4e2... by Daniel Baluta <email address hidden> on 2020-04-07

drivers: imx: irqsteer: Fix computation of status

The status of an output irqsteer line is a 64-bit value
composed of 2 x 32-bit registers.

Because the first 64-bit output line only holds status for
IRQ in[0], we need a different formula for computing the status
compared to the existing implementation for i.MX8QXP/i.MX8QM.

Mapping for status register is as follows:

line 0 -> [0 | chan0]
line 1 -> [chan2 | chan1]
line 2 -> [chan4 | chan3]
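The mapping above can be sketched as follows; the helper name and the
chan[] array are illustrative, not the driver's actual API:

```c
#include <stdint.h>

/* Build the 64-bit status of output line `line` from the per-channel
 * 32-bit status registers, read into chan[]. Line 0 only carries
 * chan0 in its lower half; line n > 0 is [chan(2n) | chan(2n - 1)]. */
static uint64_t irqsteer_line_status(unsigned int line,
				     const uint32_t *chan)
{
	if (line == 0)
		return (uint64_t)chan[0];

	return ((uint64_t)chan[2 * line] << 32) | chan[2 * line - 1];
}
```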

Signed-off-by: Daniel Baluta <email address hidden>

b4cce00... by Daniel Baluta <email address hidden> on 2020-04-07

drivers: imx: irqsteer: Add IRQ fixup for mask/unmask

The IRQSTEER module on i.MX8MP accepts 160 interrupts in 5
groups of 32 interrupts and steers them onto 3 output lines.
Each output line has 64 interrupts.

The interesting part here is how the interrupts are steered and
how the registers are mapped.

Each 32-IRQ input group has 1 register for mask/set/status. Let's call
REGn the base of such registers. Then, according to the documentation,
we have:

REG0 -> IRQ in[159:128] }
REG1 -> IRQ in[127:96]  } => IRQ out[2] [191:128] from in[159:96]
REG2 -> IRQ in[95:64]   }
REG3 -> IRQ in[63:32]   } => IRQ out[1] [127:64] from in[95:32]
REG4 -> IRQ in[31:0]    } => IRQ out[0] [63:0] from in[31:0]

Notice that the original IRQ input interrupts are shifted by 32. When
computing the corresponding registers for an output line we need
to subtract 32 to get the correct register. This is achieved using
a fixup function which only applies to i.MX8MP.

Another important observation is that IRQ in[31:0] is the only valid
group on output line 0, which leaves a gap in our representation: the
first 32 interrupt positions are unused.
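The shift and the register fixup can be sketched like this (helper
names are assumed, not the driver's actual functions):

```c
/* Input IRQ number -> bit position in the 192-bit steered output
 * space: everything is shifted by 32, so in[0] lands at bit 32 of
 * out[0] and bits [31:0] stay unused. */
static unsigned int imx8mp_irq_to_pos(unsigned int irq)
{
	return irq + 32;
}

/* Fixup: position in the output space -> mask/set/status register
 * index. Subtract 32 to undo the shift, then index from REG4
 * (in[31:0]) down to REG0 (in[159:128]). */
static unsigned int imx8mp_pos_to_reg(unsigned int pos)
{
	unsigned int irq = pos - 32;

	return 4 - irq / 32;
}
```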

Signed-off-by: Daniel Baluta <email address hidden>

8aac5d4... by Daniel Baluta <email address hidden> on 2020-04-07

drivers: imx: irqsteer: Reduce irqsteer child nodes

On i.MX8MP the irqsteer instance has only 3 output lines, so reduce
the number of child nodes to 3.

Also, IRQSTEER on i.MX8MP has fewer input interrupts than on
i.MX8QXP/i.MX8QM.

It has 160 input interrupts grouped as 5 lines of 32 interrupts, which
are then mapped onto 3 output lines, each with 64 interrupts.

The special case here is line 0, which holds only the first 32
interrupts. For this reason, we need to:
 - start the first child at 0 instead of 32, because interrupts
 starting at 32 should be in a different child cascade
 - fix the formula for computing MASK/SET/STATUS registers
 - fix the IRQ register count.

Signed-off-by: Daniel Baluta <email address hidden>

d626291... by Marc Herbert <email address hidden> on 2020-04-08

CI: Travis: less VM isolation for speed-up. Remove matrix.

Looking at any recent Travis build log:

1. More than half the time is spent in the exact same "docker pull".
2. The qemuboottest stage rebuilds the exact same things as the
   previous test stage.

Fix 1 by re-using the same Docker instance for multiple platforms.
Fix 2 by dropping from the test stage the builds performed again in
the qemuboottest stage.

Random sample, before:
  Total (VM) time 1 hr 15 min
  Real time 25 min (depends on current Travis load)
After:
  Total (VM) time 30 min
  Real time 10 min (depends on current Travis load)

The price to pay for this matrix reduction and speed-up is coarser
reports in case of failure. Considering these tests are the most basic
possible, one expects them to rarely, if ever, be broken.

Remove the top-level matrix expansion as it was becoming impractical for
these heterogeneous builds ("PLATFORM=tools"?!). The combination of the
matrix and YAML anchors was not very obvious. Use YAML anchors only.

Rename the default stage "test" to "buildonly".

Signed-off-by: Marc Herbert <email address hidden>

4223d87... by Sebastiano Carlucci <email address hidden> on 2020-04-08

audio: dcblock: Fix doxygen error for dcblock.h

This commit fixes a doxygen issue caused by a mismatch between
dcblock_find_func()'s declaration and its corresponding comment block.

Signed-off-by: Sebastiano Carlucci <email address hidden>

939c14c... by Marcin Maka <email address hidden> on 2020-04-06

comp: use list_relink in make_shared

The previous version worked only for empty lists. There is a potential
case where a buffer is already connected to some local buffer on either
end and is then connected to a buffer on another core, which calls
make_shared and expects it to preserve the existing links.

Signed-off-by: Marcin Maka <email address hidden>

c9e2f51... by Marcin Maka <email address hidden> on 2020-04-06

list: add relink operation called after the list is moved

Updates the next/prev pointers with the new address of the list.
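A sketch of such a relink for a circular doubly-linked list (names
assumed; SOF's list_item API may differ in detail): after the head has
been copied to a new address, its neighbours still hold the old pointer.

```c
struct list_item {
	struct list_item *next;
	struct list_item *prev;
};

/* Patch a moved list head: new_head is a copy of the head that used
 * to live at old_head. */
static void list_relink(struct list_item *new_head,
			struct list_item *old_head)
{
	if (new_head->next == old_head) {
		/* empty list: the old head pointed at itself */
		new_head->next = new_head;
		new_head->prev = new_head;
	} else {
		/* non-empty: point the neighbours at the new address */
		new_head->next->prev = new_head;
		new_head->prev->next = new_head;
	}
}
```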

Signed-off-by: Marcin Maka <email address hidden>

a00f42b... by Marcin Maka <email address hidden> on 2020-04-06

comp: replace rrealloc with platform method for shared objects

rrealloc is a very expensive method and may fail for a large
component. There is an existing platform-specific method to "convert"
local objects into shared ones which is very quick on existing
platforms.

Signed-off-by: Marcin Maka <email address hidden>

d29ebe8... by Tomasz Lauda <email address hidden> on 2020-04-07

pipeline: atomic schedule of connected pipelines

Implements atomic scheduling of connected pipelines that are
supposed to be triggered at the same time. If the trigger
is propagated to the connected pipelines, then the expectation
is that they are started at the same system tick. Otherwise one
of the pipes might lose some samples at the beginning.

Signed-off-by: Tomasz Lauda <email address hidden>