Commit f1a2a9b6 authored by David Miller, committed by Dave Airlie

drm: Preserve SHMLBA bits in hash key for _DRM_SHM mappings.

Platforms such as sparc64 have D-cache aliasing issues.  We
cannot allow virtual mappings in different contexts to be placed
such that two D-cache lines can hold the same backing data,
because updates made through one cache line will not be seen by
accesses through the other.

Code in sparc64 and other architectures solves this problem by
making sure that all userland mappings of MAP_SHARED objects have
the same virtual address base.  They implement this by keying
off the page offset and using it to choose a suitably consistent
virtual address for mmap() requests.

Making things even worse, getting this wrong on sparc64 can result
in hangs during DRM lock acquisition.  This is because, at least on
UltraSPARC-III, normal loads consult the D-cache but atomics such
as 'cas' (which is what cmpxchg() is implemented with) only consult
the L2 cache.  So if a D-cache alias is inserted, the load can
see different data than the atomic, and we'll loop forever because
the atomic compare-and-exchange will never complete successfully.
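
To make the failure concrete, here is a simplified sketch of the
lock acquisition retry loop (modelled on drm_lock_take() in
drm_lock.c; the identifiers come from drmP.h, and error handling
and contention flags are elided):

	unsigned int old, new, prev;
	volatile unsigned int *lock = &lock_data->hw_lock->lock;

	do {
		old = *lock;	/* normal load: can hit a stale D-cache alias */
		new = context | _DRM_LOCK_HELD;
		prev = cmpxchg(lock, old, new);	/* atomic: sees the L2 copy */
	} while (prev != old);	/* never exits if 'old' was stale */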

So to make this all work properly, we need to make sure that the
hash address computed by drm_map_handle() preserves the SHMLBA
relevant bits, and that's what this patch does for _DRM_SHM mappings.
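
To illustrate with hypothetical numbers: with 4K pages (PAGE_SHIFT
of 12) and an SHMLBA of 16K, the patch below computes
bits = ilog2(16384 >> 12) + 1 = 3, so shift = 3 and
add = (user_token >> PAGE_SHIFT) & 7.  drm_ht_just_insert_please()
then only generates hash keys of the form (key << shift) + add,
whose low three bits always equal the page colour of the original
kernel virtual address.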

As a historical note, many years ago this bug didn't exist because we
simply used the low 32 bits of the address as the hash and hoped for
the best.  That preserved the SHMLBA bits properly.  But when the
hashtab code was added to DRM, this was no longer the case.
Signed-off-by: David S. Miller <>
Signed-off-by: Dave Airlie <>
parent d30333bb
drivers/gpu/drm/drm_bufs.c
@@ -34,6 +34,8 @@
 #include <linux/vmalloc.h>
 #include <linux/log2.h>
+#include <asm/shmparam.h>
+
 #include "drmP.h"
 
 resource_size_t drm_get_resource_start(struct drm_device *dev, unsigned int resource)
@@ -83,9 +85,11 @@ static struct drm_map_list *drm_find_matching_map(struct drm_device *dev,
 static int drm_map_handle(struct drm_device *dev, struct drm_hash_item *hash,
-			  unsigned long user_token, int hashed_handle)
+			  unsigned long user_token, int hashed_handle, int shm)
 {
-	int use_hashed_handle;
+	int use_hashed_handle, shift;
+	unsigned long add;
+
 #if (BITS_PER_LONG == 64)
 	use_hashed_handle = ((user_token & 0xFFFFFFFF00000000UL) || hashed_handle);
 #elif (BITS_PER_LONG == 32)
@@ -101,9 +105,31 @@ static int drm_map_handle(struct drm_device *dev, struct drm_hash_item *hash,
 		if (ret != -EINVAL)
 			return ret;
 	}
 
+	shift = 0;
+	add = 0;
+	if (shm && (SHMLBA > PAGE_SIZE)) {
+		int bits = ilog2(SHMLBA >> PAGE_SHIFT) + 1;
+
+		/* For shared memory, we have to preserve the SHMLBA
+		 * bits of the eventual vma->vm_pgoff value during
+		 * mmap().  Otherwise we run into cache aliasing problems
+		 * on some platforms.  On these platforms, the pgoff of
+		 * a mmap() request is used to pick a suitable virtual
+		 * address for the mmap() region such that it will not
+		 * cause cache aliasing problems.
+		 *
+		 * Therefore, make sure the SHMLBA relevant bits of the
+		 * hash value we use are equal to those in the original
+		 * kernel virtual address.
+		 */
+		shift = bits;
+		add |= ((user_token >> PAGE_SHIFT) & ((1UL << bits) - 1UL));
+	}
+
 	return drm_ht_just_insert_please(&dev->map_hash, hash,
 					 user_token, 32 - PAGE_SHIFT - 3,
-					 0, 0);
+					 shift, add);
@@ -323,7 +349,8 @@ static int drm_addmap_core(struct drm_device * dev, resource_size_t offset,
 	/* We do it here so that dev->struct_mutex protects the increment */
 	user_token = (map->type == _DRM_SHM) ? (unsigned long)map->handle :
 		map->offset;
-	ret = drm_map_handle(dev, &list->hash, user_token, 0);
+	ret = drm_map_handle(dev, &list->hash, user_token, 0,
+			     (map->type == _DRM_SHM));
 	if (ret) {
 		if (map->type == _DRM_REGISTERS)