| author | Yoshii Takashi | 2012-03-14 16:14:43 +0900 |
|---|---|---|
| committer | Paul Mundt | 2012-03-28 14:26:05 +0900 |
| commit | 49d4bcaddca977fffdea8b0b71f6e5da96dac78e (patch) | |
| tree | 085210b862c8e3a52fb0869e86180fc6da1030d6 /drivers/tty | |
| parent | ffe0e190f67f23ecf57aae68888860e69ac0d52e (diff) | |
serial: sh-sci: fix a race of DMA submit_tx on transfer
When DMA is enabled, an sh-sci transfer begins with

    uart_start()
     sci_start_tx()
      if (cookie_tx < 0) schedule_work()

DMA is then started once the queued work runs, -- (A)

    process_one_work()
     work_fn_tx()
      cookie_tx = desc->tx_submit()

and finishes when the DMA transfer completes, -- (B)

    sci_dma_tx_complete()
     async_tx_ack()
     cookie_tx = -EINVAL
     (possibly another schedule_work())
This A-to-B sequence is not reentrant: the controlling variables (for
example, cookie_tx above) are plain scalars rather than queues or lists,
so the steps must run strictly as A B A B ...; any other interleaving
crashes the kernel.
To enforce this ordering, sci_start_tx() tests whether cookie_tx < 0
(meaning "not in use") before calling schedule_work(). However, cookie_tx
is only assigned a valid cookie (meaning "in use") partway through the
workqueue function work_fn_tx(). The gap between this test and the later
assignment lets the sequence be broken when uart_start() is called very
frequently. The gap between async_tx_ack() and the possible further
schedule_work() in sci_dma_tx_complete() leads to the same problem.
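To make the window concrete, the sketch below paraphrases the pre-patch logic in C. The field and function names match the driver, but the bodies are heavily abbreviated and the interleaving comments are our reading of the commit message, so treat this as an illustration rather than an excerpt.

```c
/* Pre-patch shape of path (A), heavily abbreviated. */

static void sci_start_tx(struct uart_port *port)       /* caller holds port->lock */
{
        struct sci_port *s = to_sci_port(port);

        if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) &&
            s->cookie_tx < 0)                   /* test: "no TX submission in flight"          */
                schedule_work(&s->work_tx);     /* ...but nothing yet records that one now is  */
}

static void work_fn_tx(struct work_struct *work)        /* runs later, from the workqueue */
{
        struct sci_port *s = container_of(work, struct sci_port, work_tx);
        struct dma_async_tx_descriptor *desc;

        /* ...build the scatterlist and prepare the DMA descriptor... */

        s->cookie_tx = desc->tx_submit(desc);   /* set: cookie_tx becomes "used" only here */
}

/*
 * Until work_fn_tx() reaches tx_submit(), cookie_tx is still -EINVAL, so a
 * further uart_start()/sci_start_tx() in that window passes the test again
 * and step (A) can run a second time before step (B), reusing desc_tx and
 * the other single-instance control variables.
 */
```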
This patch introduces a new state, cookie_tx == 0, which simply marks the
channel as "busy", and assigns it within the spin-locked region so that
both gaps are closed.
Signed-off-by: Takashi Yoshii <takashi.yoshii.zj@renesas.com>
Reviewed-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Diffstat (limited to 'drivers/tty')
-rw-r--r-- | drivers/tty/serial/sh-sci.c | 15 |
1 file changed, 10 insertions, 5 deletions
```diff
diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
index 61b7fd2729cd..9ec6d4e7d9e5 100644
--- a/drivers/tty/serial/sh-sci.c
+++ b/drivers/tty/serial/sh-sci.c
@@ -1229,17 +1229,20 @@ static void sci_dma_tx_complete(void *arg)
         port->icount.tx += sg_dma_len(&s->sg_tx);
 
         async_tx_ack(s->desc_tx);
-        s->cookie_tx = -EINVAL;
         s->desc_tx = NULL;
 
         if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
                 uart_write_wakeup(port);
 
         if (!uart_circ_empty(xmit)) {
+                s->cookie_tx = 0;
                 schedule_work(&s->work_tx);
-        } else if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
-                u16 ctrl = sci_in(port, SCSCR);
-                sci_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+        } else {
+                s->cookie_tx = -EINVAL;
+                if (port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
+                        u16 ctrl = sci_in(port, SCSCR);
+                        sci_out(port, SCSCR, ctrl & ~SCSCR_TIE);
+                }
         }
 
         spin_unlock_irqrestore(&port->lock, flags);
@@ -1501,8 +1504,10 @@ static void sci_start_tx(struct uart_port *port)
         }
 
         if (s->chan_tx && !uart_circ_empty(&s->port.state->xmit) &&
-            s->cookie_tx < 0)
+            s->cookie_tx < 0) {
+                s->cookie_tx = 0;
                 schedule_work(&s->work_tx);
+        }
 #endif
 
         if (!s->chan_tx || port->type == PORT_SCIFA || port->type == PORT_SCIFB) {
```