
introduce bfq i/o scheduler #2

Open

xryoshi wants to merge 3 commits into master

Conversation

xryoshi commented Dec 14, 2012

No description provided.

BFQ uses struct cfq_io_context to store its per-process, per-device data,
reusing CFQ's code for cic handling.  The code is not shared at the
moment, to minimize the impact of these patches.

This patch introduces a new hlist in each io_context to keep track of
all the cic's allocated by BFQ, so that the right destructor can be
called on module unload; the radix tree used for cic lookup also needs
to be duplicated, because it may contain dead keys inserted by one
scheduler and later retrieved by the other.

Update the io_context exit and free paths to also take care of the
BFQ cic's; a sketch of the resulting io_context layout follows.
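
A minimal sketch of the duplicated lookup structures, assuming
illustrative bfq_-prefixed field names (the surrounding fields are
abridged; this is not necessarily the patch's exact layout):

    struct io_context {
            atomic_long_t refcount;
            spinlock_t lock;                /* protects the fields below */

            unsigned short ioprio;

            /* CFQ's cic lookup tree and list of allocated cic's */
            struct radix_tree_root radix_root;
            struct hlist_head cic_list;

            /*
             * Duplicated for BFQ: its own lookup tree (the shared one
             * could hand back dead keys inserted by CFQ) and its own
             * cic list, so that module unload can call the right
             * destructor on each cic allocated by BFQ.
             */
            struct radix_tree_root bfq_radix_root;
            struct hlist_head bfq_cic_list;

            /* ... */
    };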

Change the type of the cfqq field inside struct cfq_io_context to
void *, so that it can also hold BFQ's per-queue data.

A new BFQ-specific ioprio_changed field is necessary too, to avoid
clobbering CFQ's; therefore, switch ioprio_changed to a bitmap with
one element per scheduler, as sketched below.
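
A minimal sketch of the per-scheduler bitmap; the bit names, indices
and helper below are illustrative, not necessarily the patch's exact
ones:

    #include <linux/bitmap.h>

    /* one bit per registered scheduler (names are illustrative) */
    #define IOC_CFQ_IOPRIO_CHANGED  0
    #define IOC_BFQ_IOPRIO_CHANGED  1
    #define IOC_IOPRIO_CHANGED_BITS 2

    struct io_context {
            /* ... */
            /* replaces the old "unsigned short ioprio_changed;" flag */
            DECLARE_BITMAP(ioprio_changed, IOC_IOPRIO_CHANGED_BITS);
    };

    /*
     * On an ioprio change, raise every scheduler's bit; CFQ and BFQ
     * each test and clear only their own bit, so neither clobbers
     * the other's notification.
     */
    static inline void ioc_mark_ioprio_changed(struct io_context *ioc)
    {
            bitmap_fill(ioc->ioprio_changed, IOC_IOPRIO_CHANGED_BITS);
    }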

Signed-off-by: Paolo Valente <[email protected]>
Signed-off-by: Arianna Avanzini <[email protected]>
Add a Kconfig option and make the related Makefile changes to compile
the BFQ I/O scheduler.  Also let the cgroups subsystem know about the
BFQ I/O controller.  A sketch of the kind of entries involved follows
the commit message.

Signed-off-by: Paolo Valente <[email protected]>
Signed-off-by: Arianna Avanzini <[email protected]>
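
A minimal sketch of the kind of Kconfig and Makefile entries the
commit describes; the option names (IOSCHED_BFQ, CGROUP_BFQIO), file
names and help text are assumptions patterned on the existing
IOSCHED_* options, not the patch's exact text:

    # block/Kconfig.iosched (sketch; names and text are assumptions)
    config IOSCHED_BFQ
            tristate "BFQ I/O scheduler"
            default n
            ---help---
              The BFQ I/O scheduler distributes bandwidth among all
              processes according to their weights, with strong
              latency guarantees.

    config CGROUP_BFQIO
            bool "BFQ hierarchical scheduling support"
            depends on CGROUPS && IOSCHED_BFQ=y
            default n
            ---help---
              Enable hierarchical scheduling in BFQ, exporting a
              cgroups interface so that each group can be given its
              own ioprio and ioprio_class.

    # block/Makefile (sketch)
    obj-$(CONFIG_IOSCHED_BFQ)       += bfq-iosched.o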
Add the BFQ-v5r1 I/O scheduler to 3.1.
The general structure is borrowed from CFQ, as is much of the code.  A
(bfq_)queue is associated with each task doing I/O on a device, and
each time a scheduling decision has to be made a queue is selected and
served until it expires.

    - Slices are given in the service domain: tasks are assigned budgets,
      measured in number of sectors.  Once it has been granted access to
      the disk, a task must consume its assigned budget within a
      configurable maximum time (by default, the maximum possible budget
      is automatically computed to comply with this timeout).  This allows
      the desired tradeoff between latency and throughput boosting to be
      set (see the expiration sketch after this list).

    - Budgets are scheduled according to a variant of WF2Q+, implemented
      using an augmented rb-tree to take eligibility into account while
      preserving an O(log N) overall complexity.

    - A low-latency tunable is provided; if enabled, both interactive and soft
      real-time applications are guaranteed very low latency.

    - Latency guarantees are preserved also in the presence of NCQ.

    - High throughput is achieved with flash-based devices, while latency
      guarantees are still preserved.

    - Useful features borrowed from CFQ: cooperating-queues merging (with
      some additional optimizations with respect to the original CFQ version),
      static fallback queue for OOM.

    - BFQ supports full hierarchical scheduling, exporting a cgroups
      interface.  Each node has a full scheduler, so each group can
      be assigned its own ioprio and an ioprio_class.

    - If the cgroups interface is used, weights can be explicitly
      assigned; otherwise, ioprio values are mapped to weights using the
      relation weight = IOPRIO_BE_NR - ioprio (see the sketch after this
      commit message).

    - ioprio classes are served in strict priority order, i.e.,
      lower-priority queues are not served as long as there are
      higher-priority queues.  Among queues in the same class, bandwidth
      is distributed in proportion to the weight of each queue.  A small
      amount of extra bandwidth is however guaranteed to the Idle class,
      to prevent it from starving.
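
A minimal sketch of the budget-vs-timeout expiration rule from the
first list item; the helper name and parameters are illustrative, not
the patch's actual interface:

    #include <linux/jiffies.h>

    /*
     * Illustrative only: a queue is expired either when it has
     * consumed its sector budget or when its configurable maximum
     * time has elapsed, whichever comes first.
     */
    static bool bfq_queue_should_expire(unsigned long budget_sectors,
                                        unsigned long serviced_sectors,
                                        unsigned long slice_start,
                                        unsigned long max_slice)
    {
            if (serviced_sectors >= budget_sectors)
                    return true;            /* budget exhausted */
            return time_after(jiffies, slice_start + max_slice);
    }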

Signed-off-by: Paolo Valente <[email protected]>
Signed-off-by: Arianna Avanzini <[email protected]>
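
A minimal sketch of the ioprio-to-weight mapping from the list above;
the function name is illustrative.  With IOPRIO_BE_NR == 8, ioprio 0
(the highest priority) maps to weight 8 and ioprio 7 to weight 1, so
queues of the same class share bandwidth 8 : 7 : ... : 1:

    #include <linux/ioprio.h>

    /* Illustrative helper: a lower ioprio value yields a larger weight. */
    static inline int bfq_ioprio_to_weight(int ioprio)
    {
            return IOPRIO_BE_NR - ioprio;   /* weight in [1, IOPRIO_BE_NR] */
    }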