    [JFFS2] Reduce excessive node count for syslog files. · cf5eba53
    David Woodhouse authored
    
    
    We currently get fairly poor behaviour with files which receive many
    short writes, such as system logs: we end up with many tiny data
    nodes, and the inode's fragment rbtree gets massive. None of these
    nodes are actually obsolete, so they are all counted as 'clean'
    space. Eraseblocks can be entirely full of such nodes (which are
    REF_NORMAL rather than REF_PRISTINE), yet they still count entirely
    towards 'used_size', so the eraseblocks can sit on the clean_list
    for a long time without being picked for GC.
    
    One way to alleviate this in the long term is to account REF_NORMAL
    space separately from REF_PRISTINE space, rather than counting them both
    towards used_size. Then these eraseblocks can be picked for GC and the
    offending nodes will be garbage collected.
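    
    (An illustrative sketch only, not code from this change: if the
    per-eraseblock accounting kept REF_NORMAL and REF_PRISTINE space in
    separate fields instead of one used_size, GC block selection could
    prefer blocks dominated by small mergeable nodes. The struct and
    function below are hypothetical stand-ins, not the real
    struct jffs2_eraseblock or GC picker.)
    
    #include <stdio.h>
    
    /* Hypothetical accounting split; illustrative fields only. */
    struct eraseblock_acct {
            unsigned int pristine_size;  /* REF_PRISTINE nodes: GC gains nothing */
            unsigned int normal_size;    /* REF_NORMAL nodes: merging frees space */
            unsigned int dirty_size;     /* obsolete nodes */
    };
    
    /* A block whose space is mostly small REF_NORMAL nodes becomes a GC
     * candidate even though none of its nodes are obsolete yet. */
    static int gc_should_pick(const struct eraseblock_acct *eb)
    {
            return eb->dirty_size + eb->normal_size > eb->pristine_size;
    }
    
    int main(void)
    {
            /* e.g. a block filled with tiny syslog appends */
            struct eraseblock_acct syslog_block = { 0, 60000, 0 };
    
            printf("pick for GC: %s\n",
                   gc_should_pick(&syslog_block) ? "yes" : "no");
            return 0;
    }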
    
    The short-term fix, though -- which probably makes sense even if we do
    eventually implement the above -- is to merge these nodes as they're
    written. When we write the last byte in a page, write the _whole_ page.
    This obsoletes the earlier nodes in the page _immediately_ and we don't
    even need to wait for the garbage collection to do it.
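    
    (A minimal standalone sketch of that behaviour, assuming a simplified
    commit_write() path and a stand-in write_node() rather than the real
    fs/jffs2/file.c code: when a write ends at the page boundary, its
    start is pulled back to offset 0, so the new node covers the whole
    page and obsoletes the earlier short nodes in that page.)
    
    #include <stdio.h>
    
    #define PAGE_SIZE 4096u
    
    /* Pretend to emit one data node covering [start, end) of a page. */
    static void write_node(unsigned page, unsigned start, unsigned end)
    {
            printf("node: page %u, bytes %u-%u (%u bytes)\n",
                   page, start, end, end - start);
    }
    
    /* Handle a write of [start, end) within one page.  If the write
     * reaches the end of the page, widen it to cover the whole page so
     * every earlier node for this page is obsoleted immediately. */
    static void commit_write(unsigned page, unsigned start, unsigned end)
    {
            if (end == PAGE_SIZE)
                    start = 0;      /* write the _whole_ page */
    
            write_node(page, start, end);
    }
    
    int main(void)
    {
            /* Many short appends to page 0... */
            commit_write(0, 0, 100);
            commit_write(0, 100, 230);
            /* ...and the append that fills the page emits one full-page
             * node, obsoleting the short nodes above. */
            commit_write(0, 230, PAGE_SIZE);
            return 0;
    }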
    
    Original implementation from Ferenc Havasi <havasi@inf.u-szeged.hu>
    Signed-off-by: David Woodhouse <dwmw2@infradead.org>