path: root/arch/ia64/uv
author    Eugeniy Paltsev    2018-07-26 16:15:43 +0300
committer Vineet Gupta       2018-07-27 13:12:45 -0700
commit    eb2777397fd83a4a7eaa26984d09d3babb845d2a (patch)
tree      45d820fabd2efe38bdcdd073ebaf1ed1ee12cc9b /arch/ia64/uv
parent    4c612add7b18844ddd733ebdcbe754520155999b (diff)
ARC: dma [non-IOC] setup SMP_CACHE_BYTES and cache_line_size
As of today we don't set up SMP_CACHE_BYTES and cache_line_size for ARC, so they default to L1_CACHE_BYTES. The L1 line length (L1_CACHE_BYTES) can easily be smaller than the L2 line (which is usually the case, BTW). This breaks code.

For example, it breaks the ethernet infrastructure on HSDK/AXS103 boards with IOC disabled, which relies on manual cache flushes. Functions which allocate and manage the sk_buff packet data area rely on the SMP_CACHE_BYTES define. As a result, the last L2 cache line of the sk_buff linear packet data area can be shared between the DMA buffer and useful data in some other structure, so we can lose that data when we invalidate the DMA buffer.

    sk_buff linear packet data area        skb->tail     skb->end
                  |                            |             |
                  V                            V             V
    .--------------------------------------------------------------.
    | packet data          | <tail padding>   | <useful data in other struct>
    .--------------------------------------------------------------.

    .---------------------.----------------------------------------.
    |       SLC line      |       SLC (L2 cache) line (128B)       |
    .---------------------.----------------------------------------.
                                 ^                           ^
                                 |                           |
                          These cache lines will be invalidated when we
                          invalidate the skb linear packet data area
                          before the DMA transaction starts.

This leads to issues which are painful to debug, as they reproduce only if (sk_buff->end - sk_buff->tail) < SLC_LINE_SIZE and there is some useful data right after sk_buff->end.

Fix that by hardcoding SMP_CACHE_BYTES to the maximum line length we may have.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
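A minimal sketch of the idea, assuming the SLC line can be up to 128 bytes (the defines and value below are illustrative of the approach described in the message, not necessarily the literal diff of this commit), is to override the generic fallbacks in the ARC cache header:

    /* arch/arc/include/asm/cache.h -- illustrative sketch, not the literal diff */

    /*
     * The SLC (L2) line can be larger than the L1 line, so size
     * SMP_CACHE_BYTES-aligned allocations (e.g. the sk_buff linear data
     * area) to the largest line we may have instead of letting both
     * fall back to L1_CACHE_BYTES.
     */
    #define SMP_CACHE_BYTES     128
    #define cache_line_size()   SMP_CACHE_BYTES

Since SMP_CACHE_BYTES is used as a compile-time constant (e.g. by SKB_DATA_ALIGN in the networking code), the maximum possible line length is hardcoded rather than probed at runtime.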
Diffstat (limited to 'arch/ia64/uv')
0 files changed, 0 insertions, 0 deletions