    tty_buffer: get rid of 'seen_tail' logic in flush_to_ldisc · 81de916f
    Linus Torvalds authored
    The flush_to_ldisc() work entry has special logic to notice when it has
    seen the original tail of the data queue, and it avoids continuing the
    flush if it sees that _original_ tail rather than the current tail.
    
    This logic can trigger in case somebody is constantly adding new data to
    the tty while the flushing is active - and the intent is to avoid
    excessive CPU usage while flushing the tty, especially as we used to do
    this from a softirq context which made it non-preemptible.
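    
    A minimal sketch of the pattern described above (illustrative only, not
    the actual drivers/tty/tty_buffer.c code; the structure and helper names
    here are made up):
    
        #include <stdbool.h>
        #include <stddef.h>
    
        struct buf {
                struct buf *next;
                /* ... payload ... */
        };
    
        struct buf_queue {
                struct buf *head;
                struct buf *tail;
        };
    
        void flush_one(struct buf *b);  /* hand one buffer to the line discipline */
    
        static void flush_work_fn(struct buf_queue *q)
        {
                struct buf *head, *tail = q->tail;  /* snapshot the original tail */
                bool seen_tail = false;
    
                while ((head = q->head) != NULL && !seen_tail) {
                        if (head == tail)
                                seen_tail = true;   /* last buffer this run will flush */
                        flush_one(head);
                        q->head = head->next;
                }
                /* anything queued after the snapshot is left for a later work run */
        }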
    
    However, since we no longer re-arm the work-queue from within itself
    (because that causes other trouble: see commit a5660b41 "tty: fix
    endless work loop when the buffer fills up"), this just leads to
    possible hung tty's (most easily seen in SMP and with a test-program
    that floods a pty with data - nobody seems to have reported this for any
    real-life situation yet).
    
    And since the workqueue isn't done from timers and softirqs any more,
    it's doubtful whether the CPU usage issue is really relevant any more.
    So just remove the logic entirely, and see if anybody ever notices.
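    
    With the seen_tail logic dropped, the same sketch simply keeps draining
    whatever is queued, including buffers added while the flush is running
    (again, illustrative names only, not the real kernel code):
    
        static void flush_work_fn(struct buf_queue *q)
        {
                struct buf *head;
    
                while ((head = q->head) != NULL) {
                        flush_one(head);        /* hand it to the line discipline */
                        q->head = head->next;
                }
        }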
    
    Alternatively, we might want to re-introduce the "re-arm the work" for
    just this case, but then we'd have to re-introduce the delayed work
    model or some explicit timer, which really doesn't seem worth it for
    this.
    
    Reported-and-tested-by: Guillaume Chazarain <guichaz@gmail.com>
    Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
    Cc: Felipe Balbi <balbi@ti.com>
    Cc: Greg Kroah-Hartman <gregkh@suse.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>