Commit 197888d7 authored by Charlie Jacobsen, committed by Vikram Narayanan

Installs the PMFS filesystem.

This was done by applying pmfs.patch from our version
of the PMFS git repository (currently here -
https://gitlab.flux.utah.edu/xcap/pmfs), and squashing
all of the commits from the patch into this one.

pmfs.patch was generated by essentially diffing the
v3.10-compatible PMFS (the head of the master branch
in our repo) against Linux v3.10 (tagged in our repo
as linux-v3.10). There were a few minor patch
conflicts I had to fix manually. Those changes are
reflected in the pmfs.patch that is in our repo.
parent 6e78ea76
PMFS Introduction
=================
PMFS is a file system for persistent memory. The file system is optimized to be
lightweight and efficient in providing access to persistent memory that is
directly accessible via CPU load/store instructions. It manages the persistent
memory directly and avoids the block driver layer and page cache layer and thus
provides synchronous reads and writes to the persistent area. It supports all
the
existing POSIX style file system APIs so that the applications need not be
modified to use this file system. In addition, PMFS provides support for huge
pages to minimize TLB entry usage and speed up virtual address lookup. PMFS's
mmap interface can map a file's data directly into the process's address space
without any intermediate buffering. This file system has been validated using
DRAM to emulate persistent memory. Hence, PMFS also provides an option to load
the file system from a disk-based file into memory during mount and save the
file system from memory into the disk-based file during unmount. PMFS also
guarantees consistent and durable updates to the file system meta-data against
arbitrary system and power failures. PMFS uses journaling (undo log) to provide
consistent updates to meta-data.
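For instance, the following user-space sketch (hedged; the mount point and
file name are placeholders and error handling is minimal) writes through such
a direct mapping and flushes it with msync():

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;
	int fd = open("/mnt/pmfs/example", O_CREAT | O_RDWR, 0644);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("open/ftruncate");
		return EXIT_FAILURE;
	}

	/* With PMFS the mapping references the file's blocks directly,
	 * with no page-cache copy in between. */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	strcpy(p, "hello, persistent memory");
	msync(p, len, MS_SYNC);

	munmap(p, len);
	close(fd);
	return EXIT_SUCCESS;
}
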
Configuring PMFS
================
PMFS uses a physically contiguous area of DRAM (which is not used by the
kernel) as the file system space. To make sure that the kernel doesn't use a
certain contiguous physical memory area, you can boot the kernel with the
'memmap' kernel command-line option. For more information, please see
Documentation/kernel-parameters.txt.
For example, adding 'memmap=2G$4G' to the kernel boot parameters will reserve
2G of memory, starting at 4G. (You may have to escape the $ so it isn't
interpreted by GRUB 2, if you use that as your boot loader.)
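For instance, on a system where you edit the kernel command line in grub.cfg
directly, the entry might look like this (a hedged illustration; the kernel
image path and root= value are placeholders, and escaping rules differ between
boot-loader configurations):
linux /boot/vmlinuz-3.10.0 root=/dev/sda1 ro memmap=2G\$4G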
After the OS has booted, you can initialize PMFS during the mount command by
passing the 'init=' mount option.
For example,
#mount -t pmfs -o physaddr=0x100000000,init=2G none /mnt/pmfs
The above command will create a PMFS file system in the 2GB region starting at
0x100000000 (4GB) and mount it at /mnt/pmfs. There are many other mount time
options supported by pmfs. Some of the main options include:
wprotect: This option protects pmfs from stray writes (e.g., because of kernel
bugs). It makes sure that the file system is mapped read-only into the kernel
and is made writable only for a brief period when writing to it; see the sketch
after the option examples below. (EXPERIMENTAL - Use with Caution).
jsize: This option specifies the journal size. Default is 4MB.
hugemmap: This option enables support for using huge pages in memory-mapped
files.
backing: This option specifies a disk-based file that should be used as a
persistent backing store for pmfs during mount and unmount.
#mount -t pmfs -o physaddr=0x100000000,init=2G,backing="/data/pmfs.img" none /mnt/pmfs
The above example initializes a 2GB PMFS and, during unmount, saves the file
system to the file /data/pmfs.img.
#mount -t pmfs -o physaddr=0x100000000,backing="/data/pmfs.img" none /mnt/pmfs
The above example loads the PMFS from /data/pmfs.img during mount and saves
the file system to /data/pmfs.img during unmount.
backing_opt: This option specifies how the backing file should be used. It can
take two values:
1: This value means that PMFS will not be loaded from the backing file during
mount. The file system is either created using the 'init=' option, or the
pre-existing file system in memory is used.
2: This value means that the PMFS will not be stored to the backing file during
unmount.
If backing_opt is not specified, PMFS will load the file system from the
backing file during mount (if the init= option is not specified) and store the
file system to the backing file during unmount.
#mount -t pmfs -o physaddr=0x100000000,backing="/data/pmfs.img",backing_opt=2 none /mnt/pmfs
The above example loads the PMFS from /data/pmfs.img during mount but does not
save the file system to /data/pmfs.img during unmount.
#mount -t pmfs -o physaddr=0x100000000,backing="/data/pmfs.img",backing_opt=1 none /mnt/pmfs
The above example assumes that there is a PMFS already present at the specified
physical address (created during an earlier mount). It uses that same PMFS
instead of loading it from /data/pmfs.img. It, however, saves the file system
to /data/pmfs.img during unmount.
For a full list of options, please refer to the source code.
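As a rough illustration of the wprotect idea (a minimal sketch for x86, not
PMFS's actual code; the pm_update_u64() name is hypothetical), the file system
can keep its mapping read-only and open a short, CPU-local write window around
each update, for example by temporarily clearing the CR0.WP bit:

#include <linux/irqflags.h>
#include <linux/types.h>
#include <asm/processor-flags.h>   /* X86_CR0_WP */
#include <asm/special_insns.h>     /* read_cr0(), write_cr0() */

/* Example: update one metadata word in the (normally read-only) mapping. */
static void pm_update_u64(volatile u64 *slot, u64 val)
{
	unsigned long flags;

	/* Open a short, CPU-local write window: disable interrupts and
	 * clear CR0.WP so kernel writes ignore read-only page protection. */
	local_irq_save(flags);
	write_cr0(read_cr0() & ~X86_CR0_WP);

	*slot = val;

	/* Close the window: restore write protection, then interrupts. */
	write_cr0(read_cr0() | X86_CR0_WP);
	local_irq_restore(flags);
}
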
Using Huge Pages with PMFS
==========================
PMFS supports the use of huge pages through the fallocate() and ftruncate()
system calls. These functions set the file size and also provide PMFS with a
hint about what data-block size to use (fallocate() also pre-allocates the
data blocks). For example, if we set the file size below 2MB, a 4KB block size
is used; if we set the file size >= 2MB but < 1GB, a 2MB block size is used;
and if we set the file size >= 1GB, a 1GB block size is used. fallocate() or
ftruncate() should be called on an empty file (size 0) for the block-size hint
to be applied properly. So, a good way to use huge pages in PMFS is to open a
new file through the open() system call, and then call fallocate() or
ftruncate() to set the file size and block-size hint. Remember that it is only
a hint, so if PMFS can't find enough free blocks of a particular size, it will
try to use a smaller block size. If the block-size hint is not set, the default
4KB block size will be used for the file's data blocks.
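For example, the following user-space sketch (hedged; the mount point and file
name are placeholders and error handling is minimal) creates a new, empty file
and asks for a 2MB block size by pre-allocating 64MB with fallocate():

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* Placeholder path: assumes PMFS is mounted at /mnt/pmfs. */
	int fd = open("/mnt/pmfs/bigfile", O_CREAT | O_RDWR | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* The file is still empty (size 0), so this both sets the size hint
	 * and pre-allocates the blocks: 64MB is >= 2MB and < 1GB, so PMFS
	 * should pick a 2MB block size (it is only a hint). */
	if (fallocate(fd, 0, 0, 64L * 1024 * 1024) < 0) {
		perror("fallocate");
		close(fd);
		return EXIT_FAILURE;
	}

	close(fd);
	return EXIT_SUCCESS;
}

The file can then be mmap()ed as usual; with the hugemmap mount option
described above, PMFS can back the mapping with huge pages.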
Current Limitations
===================
a) PMFS uses a memory region not used by the kernel. Hence the memory needs to
be reserved using the memmap= option or BIOS ACPI tables.
b) Because of multiple block-size support, PMFS supports multiple maximum file
sizes. For example, if the file's block size is 4KB, the file can grow up to
512 GB in size; if the block size is 2MB, the file can grow up to 256 TB; and
if the block size is 1GB, the file can grow up to 128 PB. (Each of these limits
works out to 2^27 data blocks per file.)
c) PMFS does not currently support extended attributes.
d) PMFS currently only works with x86_64 kernels.
Contributing
============
Please send bug reports/comments/feedback to the PMFS development
list: linux-pmfs@lists.infradead.org
You are also encouraged to subscribe to the mailing list
by sending an email with the Subject line 'subscribe' to
linux-pmfs-request@lists.infradead.org.
We prefer pull requests as well as patches sent to the mailing list.
Also feel free to join us on the IRC channel #pmfs on irc.oftc.net.
@@ -175,6 +175,9 @@ config USER_RETURN_NOTIFIER
config HAVE_IOREMAP_PROT
bool
config HAVE_SET_MEMORY_RO
bool
config HAVE_KPROBES
bool
@@ -21,6 +21,7 @@ config X86
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ANON_INODES
select ARCH_CLOCKSOURCE_DATA
select HAVE_SET_MEMORY_RO
select ARCH_DISCARD_MEMBLOCK
select ARCH_HAS_ACPI_TABLE_UPGRADE if ACPI
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
@@ -183,9 +183,15 @@ extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size);
#define ioremap_uc ioremap_uc
extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
extern void __iomem * ioremap_cache_ro(resource_size_t offset,
unsigned long size);
extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size,
unsigned long prot_val);
extern void __iomem *
ioremap_hpage_cache_ro(resource_size_t phys_addr, unsigned long size);
extern void __iomem *
ioremap_hpage_cache(resource_size_t phys_addr, unsigned long size);
/*
* The default ioremap() behavior is non-cached:
*/
@@ -203,6 +203,7 @@ enum page_cache_mode {
#define PAGE_KERNEL_IO __pgprot(__PAGE_KERNEL_IO)
#define PAGE_KERNEL_IO_NOCACHE __pgprot(__PAGE_KERNEL_IO_NOCACHE)
#define PAGE_KERNEL_IO_LARGE __pgprot(__PAGE_KERNEL_IO | _PAGE_PSE)
/* xwr */
#define __P000 PAGE_NONE
@@ -15,6 +15,7 @@
#include <linux/random.h>
#include <linux/uaccess.h>
#include <linux/elf.h>
#include <linux/export.h>
#include <asm/ia32.h>
#include <asm/syscalls.h>
@@ -214,3 +215,152 @@ bottomup:
*/
return arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
}
static unsigned long arch_get_unmapped_area_bottomup_sz(struct file *file,
unsigned long addr, unsigned long len, unsigned long align_size,
unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
unsigned long start_addr;
if (len > mm->cached_hole_size) {
start_addr = mm->free_area_cache;
} else {
start_addr = TASK_UNMAPPED_BASE;
mm->cached_hole_size = 0;
}
full_search:
addr = ALIGN(start_addr, align_size);
for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
/* At this point: (!vma || addr < vma->vm_end). */
if (TASK_SIZE - len < addr) {
/*
* Start a new search - just in case we missed
* some holes.
*/
if (start_addr != TASK_UNMAPPED_BASE) {
start_addr = TASK_UNMAPPED_BASE;
mm->cached_hole_size = 0;
goto full_search;
}
return -ENOMEM;
}
if (!vma || addr + len <= vma->vm_start) {
mm->free_area_cache = addr + len;
return addr;
}
if (addr + mm->cached_hole_size < vma->vm_start)
mm->cached_hole_size = vma->vm_start - addr;
addr = ALIGN(vma->vm_end, align_size);
}
}
static unsigned long arch_get_unmapped_area_topdown_sz(struct file *file,
unsigned long addr0, unsigned long len, unsigned long align_size,
unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma, *prev_vma;
unsigned long base = mm->mmap_base, addr = addr0;
unsigned long largest_hole = mm->cached_hole_size;
unsigned long align_mask = ~(align_size - 1);
int first_time = 1;
/* don't allow allocations above current base */
if (mm->free_area_cache > base)
mm->free_area_cache = base;
if (len <= largest_hole) {
largest_hole = 0;
mm->free_area_cache = base;
}
try_again:
/* make sure it can fit in the remaining address space */
if (mm->free_area_cache < len)
goto fail;
/* either no address requested or can't fit in requested address hole */
addr = (mm->free_area_cache - len) & align_mask;
do {
/*
* Lookup failure means no vma is above this address,
* i.e. return with success:
*/
vma = find_vma(mm, addr);
if (!vma)
return addr;
/*
* new region fits between prev_vma->vm_end and
* vma->vm_start, use it:
*/
prev_vma = vma->vm_prev;
if (addr + len <= vma->vm_start &&
(!prev_vma || (addr >= prev_vma->vm_end))) {
/* remember the address as a hint for next time */
mm->cached_hole_size = largest_hole;
return (mm->free_area_cache = addr);
} else {
/* pull free_area_cache down to the first hole */
if (mm->free_area_cache == vma->vm_end) {
mm->free_area_cache = vma->vm_start;
mm->cached_hole_size = largest_hole;
}
}
/* remember the largest hole we saw so far */
if (addr + largest_hole < vma->vm_start)
largest_hole = vma->vm_start - addr;
/* try just below the current vma->vm_start */
addr = (vma->vm_start - len) & align_mask;
} while (len <= vma->vm_start);
fail:
/*
* if hint left us with no space for the requested
* mapping then try again:
*/
if (first_time) {
mm->free_area_cache = base;
largest_hole = 0;
first_time = 0;
goto try_again;
}
/*
* A failed mmap() very likely causes application failure,
* so fall back to the bottom-up function here. This scenario
* can happen with large stack limits and large mmap()
* allocations.
*/
mm->free_area_cache = TASK_UNMAPPED_BASE;
mm->cached_hole_size = ~0UL;
addr = arch_get_unmapped_area_bottomup_sz(file, addr0, len, align_size,
pgoff, flags);
/*
* Restore the topdown base:
*/
mm->free_area_cache = base;
mm->cached_hole_size = ~0UL;
return addr;
}
unsigned long arch_get_unmapped_area_sz(struct file *file,
unsigned long addr, unsigned long len, unsigned long align_size,
unsigned long pgoff, unsigned long flags)
{
struct mm_struct *mm = current->mm;
if (mm->get_unmapped_area == arch_get_unmapped_area)
return arch_get_unmapped_area_bottomup_sz(file, addr, len, align_size,
pgoff, flags);
return arch_get_unmapped_area_topdown_sz(file, addr, len, align_size,
pgoff, flags);
}
EXPORT_SYMBOL(arch_get_unmapped_area_sz);
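The EXPORT_SYMBOL above makes the size-aligned area allocator available to
modules. As a rough illustration (a sketch, not code from this patch; the
pmfs_get_unmapped_area name and the choice of PMD_SIZE alignment are
assumptions), a file system could call it from its get_unmapped_area hook to
obtain 2MB-aligned mmap() regions:

#include <linux/fs.h>
#include <linux/mm.h>
#include <asm/pgtable.h>

/* Added by the patch above; repeated here only so the sketch is
 * self-contained. */
extern unsigned long arch_get_unmapped_area_sz(struct file *file,
		unsigned long addr, unsigned long len,
		unsigned long align_size, unsigned long pgoff,
		unsigned long flags);

/* Hypothetical f_op->get_unmapped_area implementation: request a
 * PMD-aligned (2MB on x86-64) virtual range so 2MB data blocks can be
 * mapped with huge page table entries. */
static unsigned long pmfs_get_unmapped_area(struct file *file,
		unsigned long addr, unsigned long len,
		unsigned long pgoff, unsigned long flags)
{
	return arch_get_unmapped_area_sz(file, addr, len, PMD_SIZE,
					 pgoff, flags);
}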
@@ -150,6 +150,27 @@ void __init early_alloc_pgt_buf(void)
int after_bootmem;
early_param_on_off("gbpages", "nogbpages", direct_gbpages, CONFIG_X86_DIRECT_GBPAGES);
//FIXME: Pass it via early param
// Place it here - should resolve after moving to v4.8
int direct_gbpages
#ifdef CONFIG_DIRECT_GBPAGES
= 1
#endif
;
static void __init init_gbpages(void)
{
#ifdef CONFIG_X86_64
if (direct_gbpages && cpu_has_gbpages)
printk(KERN_INFO "Using GB pages for direct mapping\n");
else
{
printk(KERN_INFO "direct_gbpages(%d). cpu_has_gbpages(%d)."
"GB pages not supported.\n", direct_gbpages, cpu_has_gbpages);
direct_gbpages = 0;
}
#endif
}
struct map_range {
unsigned long start;
@@ -20,6 +20,7 @@
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
#include <asm/pat.h>
#include <asm/cpufeature.h>
#include "physaddr.h"
@@ -79,8 +80,9 @@ static int __ioremap_check_ram(unsigned long start_pfn, unsigned long nr_pages,
* have to convert them into an offset in a page-aligned mapping, but the
* caller shouldn't need to know that small detail.
*/
static void __iomem *__ioremap_caller(resource_size_t phys_addr,
unsigned long size, enum page_cache_mode pcm, void *caller)
static void __iomem *___ioremap_caller(resource_size_t phys_addr,
unsigned long size, unsigned long prot_val, void *caller,
unsigned int hpages, unsigned int readonly)
{
unsigned long offset, vaddr;
resource_size_t pfn, last_pfn, last_addr;
@@ -171,6 +173,10 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
break;
}
/* Map pages RO */
if (readonly)
prot = __pgprot((unsigned long)prot.pgprot & ~_PAGE_RW);
/*
* Ok, go for it..
*/
@@ -183,8 +189,16 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
if (kernel_map_sync_memtype(phys_addr, size, pcm))
goto err_free_area;
if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
goto err_free_area;
if (hpages)
{
if (ioremap_hpage_range(vaddr, vaddr + size, phys_addr, prot))
goto err_free_area;
}
else
{
if (ioremap_page_range(vaddr, vaddr + size, phys_addr, prot))
goto err_free_area;
}
ret_addr = (void __iomem *) (vaddr + offset);
mmiotrace_ioremap(unaligned_phys_addr, unaligned_size, ret_addr);
@@ -204,6 +218,21 @@ err_free_memtype:
return NULL;
}
/*
* Remap an arbitrary physical address space into the kernel virtual
* address space. Needed when the kernel wants to access high addresses
* directly.
*
* NOTE! We need to allow non-page-aligned mappings too: we will obviously
* have to convert them into an offset in a page-aligned mapping, but the
* caller shouldn't need to know that small detail.
*/
static void __iomem *__ioremap_caller(resource_size_t phys_addr,
unsigned long size, unsigned long prot_val, void *caller)
{
return ___ioremap_caller(phys_addr, size, prot_val, caller, 0, 0);
}
/**
* ioremap_nocache - map bus memory into CPU space
* @phys_addr: bus address of the memory
@@ -313,9 +342,40 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
{
return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
__builtin_return_address(0));
void __iomem *
ioremap_hpage_cache(resource_size_t phys_addr, unsigned long size)
{
/* Map using hugepages */
return ___ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
__builtin_return_address(0), 1, 0);
}
EXPORT_SYMBOL(ioremap_hpage_cache);
void __iomem *
ioremap_hpage_cache_ro(resource_size_t phys_addr, unsigned long size)
{
/* Map using hugepages */
return ___ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
__builtin_return_address(0), 1, 1);
}
EXPORT_SYMBOL(ioremap_hpage_cache_ro);
void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
{
/* Map using 4k pages */
return ___ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
__builtin_return_address(0), 0, 0);
}
EXPORT_SYMBOL(ioremap_cache);
void __iomem *ioremap_cache_ro(resource_size_t phys_addr, unsigned long size)
{
/* Map using 4k pages */
return ___ioremap_caller(phys_addr, size, _PAGE_CACHE_WB,
__builtin_return_address(0), 0, 1);
}
EXPORT_SYMBOL(ioremap_cache_ro);
void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size,
unsigned long prot_val)
{
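The ioremap_cache_ro()/ioremap_hpage_cache*() variants exported above give
callers a cached mapping of a physical range, optionally read-only and
optionally built from huge pages. As a rough illustration (a sketch, not code
from this patch; everything except the ioremap_* calls is hypothetical), a
persistent-memory file system might map its reserved region at mount time like
this:

#include <linux/io.h>
#include <linux/printk.h>
#include <linux/types.h>

/* Hypothetical mount-time helper: map the physical range reserved with
 * memmap= as cacheable memory, using huge pages, and read-only when
 * stray-write protection is requested. */
static void __iomem *pm_map_region(resource_size_t phys, unsigned long size,
				   bool write_protect)
{
	void __iomem *virt;

	if (write_protect)
		virt = ioremap_hpage_cache_ro(phys, size);
	else
		virt = ioremap_hpage_cache(phys, size);

	if (!virt)
		pr_err("pm_map_region: cannot map %lu bytes at %pa\n",
		       size, &phys);
	return virt;
}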
@@ -430,6 +430,8 @@ static int pat_pagerange_is_ram(resource_size_t start, resource_size_t end)
unsigned long end_pfn = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
struct pagerange_state state = {start_pfn, 0, 0};
if (start_pfn >= max_pfn)
return 0;
/*
* For legacy reasons, physical address range in the legacy ISA
* region is tracked as non-RAM. This will allow users of
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/export.h>
#include <asm/pgalloc.h>
#include <asm/pgtable.h>
#include <asm/tlb.h>
@@ -423,6 +424,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
return changed;
}
EXPORT_SYMBOL(ptep_set_access_flags);
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
int pmdp_set_access_flags(struct vm_area_struct *vma,
@@ -12,6 +12,8 @@ if BLOCK
config FS_IOMAP
bool
depends on EXT2_FS_XIP || PMFS_XIP
default y
source "fs/ext2/Kconfig"
source "fs/ext4/Kconfig"
@@ -245,6 +247,7 @@ source "fs/romfs/Kconfig"
source "fs/pstore/Kconfig"
source "fs/sysv/Kconfig"
source "fs/ufs/Kconfig"
source "fs/pmfs/Kconfig"
source "fs/exofs/Kconfig"
endif # MISC_FILESYSTEMS
@@ -129,3 +129,4 @@ obj-y += exofs/ # Multiple modules
obj-$(CONFIG_CEPH_FS) += ceph/
obj-$(CONFIG_PSTORE) += pstore/
obj-$(CONFIG_EFIVAR_FS) += efivarfs/
obj-$(CONFIG_PMFS) += pmfs/
config PMFS
tristate "Persistent and Protected PM file system support"
depends on HAS_IOMEM
select CRC16
help
If your system has a block of fast (comparable in access speed to
system memory) and non-volatile byte-addressable memory and you wish to
mount a light-weight, full-featured, and space-efficient filesystem over
it, say Y here, and read <file:Documentation/filesystems/pmfs.txt>.
To compile this as a module, choose M here: the module will be
called pmfs.
config PMFS_XIP
bool "Execute-in-place in PMFS"
depends on PMFS && BLOCK
help
Say Y here to enable XIP feature of PMFS.
config PMFS_WRITE_PROTECT
bool "PMFS write protection"
depends on PMFS && MMU && HAVE_SET_MEMORY_RO
default y
help
Say Y here to enable the write protect feature of PMFS.
config PMFS_TEST
boolean
depends on PMFS
config PMFS_TEST_MODULE
tristate "PMFS Test"
depends on PMFS && PMFS_WRITE_PROTECT && m
select PMFS_TEST
help
Say Y here to build a simple module to test the protection of
PMFS. The module will be called pmfs_test.
#
# Makefile for the linux pmfs-filesystem routines.
#
obj-$(CONFIG_PMFS) += pmfs.o
obj-$(CONFIG_PMFS_TEST_MODULE) += pmfs_test.o
pmfs-y := bbuild.o balloc.o dir.o file.o inode.o namei.o super.o symlink.o ioctl.o journal.o
pmfs-$(CONFIG_PMFS_WRITE_PROTECT) += wprotect.o
pmfs-$(CONFIG_PMFS_XIP) += xip.o
/*
* PMFS emulated persistence. This file contains code to
* handle data blocks of various sizes efficiently.
*
* Persistent Memory File System
* Copyright (c) 2012-2013, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT