net/mlx5: DR, Cache STE shadow memory
author	Yevgeny Kliteynik <kliteyn@nvidia.com>
	Thu, 23 Dec 2021 23:07:30 +0000 (01:07 +0200)
committer	Saeed Mahameed <saeedm@nvidia.com>
	Thu, 24 Feb 2022 00:08:09 +0000 (16:08 -0800)
commit	e5b2bc30c21139ae10f0e56989389d0bc7b7b1d6
tree	1b36a8854ade3511a2cc9cea9cf3374bfaa40938
parent	f908a35b22180c4da64cf2647e4f5f0cd3054da7
net/mlx5: DR, Cache STE shadow memory

During rule insertion, for each ICM memory chunk we also allocate shadow memory
used for management. This includes the hw_ste, dr_ste and miss list per entry.
Since the scale of these allocations is large, we noticed a performance hiccup
once malloc and free are stressed.
In extreme use cases, when ~1M chunks are freed at once, it can take up to 40
seconds to complete, to the point that the kernel reports a self-detected
stall on the CPU:

 rcu: INFO: rcu_sched self-detected stall on CPU

To resolve this, we increase the reuse of shadow memory.
With this change, the time in the aforementioned use case dropped from ~40
seconds to ~8-10 seconds.

Fixes: 29cf8febd185 ("net/mlx5: DR, ICM pool memory allocator")
Signed-off-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
drivers/net/ethernet/mellanox/mlx5/core/steering/mlx5dr.h