Last reviewed and updated: 10 August 2020
After having the opportunity to teach our course Writing WDF Drivers for Windows several zillion times now, we’ve learned quite a few things. One of the most important things we’ve learned is that the WDFQUEUE is one of the most underappreciated of the Framework objects. This is unfortunate, because WDF Queues are one of the most interesting and powerful objects that you’ll use in your WDF driver. Plus, the WDF Queue is an object type that just about every WDF driver uses. It therefore seems reasonable that you should know a bit about these most excellent WDF Queue Objects.
Not Your Ordinary Queue
When we explain WDF Queues in our classes, the first problem that arises is that most devs have a pre-conceived notion of the meaning of the word “queue”. When they hear this term, they immediately think of linked lists. If they’re WDM driver writers, they think of LIST_ENTRY. While WDFQUEUEs can be used similarly to standard linked lists if you want, they are in fact much more special than those ordinary linked lists. In fact, WDF Queues are the primary mechanism that WDF drivers use to sort, manage, and control the delivery of Requests for processing.
Queues are of primary importance to WDF Drivers because they provide the most common method for delivering I/O requests from the Framework to the driver. As the Framework receives I/O requests from the Windows I/O Manager, it inserts the WDF Request object that represents each request on one of the driver’s Incoming Request Queues. How those Requests are ultimately delivered to the driver depends on the Queue’s Dispatch Type.
Queue Dispatch Types
When you configure and subsequently create a WDF Queue, you specify the Queue’s Dispatch Type. Dispatch Type controls how many Requests from the Queue can be in progress in your driver at a time, and how those Requests are presented. The possible Queue Dispatch Types are:
Sequential Dispatch Type
In Sequential Dispatching, only one Request from the Queue may be in-progress in your driver at one time. A new Request is not presented to your driver until your driver:
- Completes the current Request (regardless of the completion status)
- Forwards the current Request to another Queue
- Sends the current Request to an I/O Target, using the Send and Forget option.
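By way of illustration, here’s a minimal sketch of what configuring a Sequential Queue might look like. Treat it as a sketch under assumptions: MyEvtIoWrite is a hypothetical callback name, and device is assumed to be your WDFDEVICE handle.

```c
WDF_IO_QUEUE_CONFIG queueConfig;
WDFQUEUE            queue;
NTSTATUS            status;

//
// A Queue that presents one Request to the driver at a time.
//
WDF_IO_QUEUE_CONFIG_INIT(&queueConfig, WdfIoQueueDispatchSequential);

queueConfig.EvtIoWrite = MyEvtIoWrite;   // hypothetical callback name

status = WdfIoQueueCreate(device,
                          &queueConfig,
                          WDF_NO_OBJECT_ATTRIBUTES,
                          &queue);
```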
Parallel Dispatch Type
When you specify Parallel Dispatching for a Queue, multiple Requests can be in progress in your driver from that Queue simultaneously. Instead of waiting for one Request to finish before presenting your driver with another Request, as in Sequential Dispatching, in Parallel Dispatching the Framework will continue to present your driver with Requests from the Queue until some maximum number of Requests are in progress from that Queue. You specify the maximum number of Requests that can be in progress at one time when you configure the Queue. By default, the maximum number of Requests is “unlimited.”
If you’re clever, you’ll note that using Parallel Dispatching and setting the maximum number of in-progress Requests to one gets you exactly the same behavior as what you get from Sequential Dispatching. Good catch!
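In code, that maximum is simply part of the Queue’s configuration. A hedged sketch:

```c
WDF_IO_QUEUE_CONFIG queueConfig;

WDF_IO_QUEUE_CONFIG_INIT(&queueConfig, WdfIoQueueDispatchParallel);

//
// Limit how many Requests from this Queue can be in progress in the
// driver at one time.  Setting this to 1 mimics Sequential Dispatching.
//
queueConfig.Settings.Parallel.NumberOfPresentedRequests = 1;
```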
Manual Dispatch Type
Manual Dispatching is very different from other Queue Dispatch Types. When you specify Manual Dispatching, Requests are explicitly retrieved from the Queue by your driver; the framework never calls your driver to present Requests from the Queue.
Because the Manual Dispatch Type requires a driver to actively “pull” Requests from the Queue, Queues with this Dispatch Type are almost never used for delivering Requests from the Framework to a driver. Consider why this is the case: If you use a Queue with the Manual Dispatch Type to receive Requests from the Framework, your driver would have to poll the Queue – periodically attempting to remove entries – in order to get work. When compared to the other Dispatch Types, which present Requests to your driver when appropriate, this isn’t a very appealing alternative.
Queues with the Manual Dispatch Type are almost always used to hold Requests that have been previously presented to a driver via another Queue. Thus, Queues with the Manual Dispatch Type are most like ordinary queues or linked lists. They’re used to hold items that originated elsewhere until needed.
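Because your driver does the pulling, working with a manual Queue mostly boils down to calling WdfIoQueueRetrieveNextRequest when you’re ready for more work. A minimal sketch follows; myManualQueue is assumed to be a Queue you created earlier with the Manual Dispatch Type.

```c
WDFREQUEST request;
NTSTATUS   status;

//
// Pull the next Request, if any, from the manual Queue.
//
status = WdfIoQueueRetrieveNextRequest(myManualQueue, &request);

if (NT_SUCCESS(status)) {
    //
    // The Request is now in progress in the driver.  We must eventually
    // complete it, forward it, or send it to an I/O Target.
    //
    WdfRequestComplete(request, STATUS_SUCCESS);

} else if (status == STATUS_NO_MORE_ENTRIES) {
    //
    // The Queue is empty... nothing to do right now.
    //
}
```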
So, a Queue’s Dispatch Type controls both how Requests are delivered to your driver (by presenting Requests to your driver, or by your driver manually calling a function to remove a Request from the Queue) and how many Requests may be in progress in your driver at a time (for Sequential Dispatching one Request can be in progress at a time; for Parallel Dispatching multiple Requests – up to a limit defined by your driver – can be in progress at a time). Queues with Sequential Dispatching or Parallel Dispatching are most often used to control the flow of Requests from the Framework to a driver. Queues with Manual Dispatching are typically used to temporarily hold Requests that a driver has received via another method.
Queue States
Another way in which WDF Queues differ from ordinary linked lists is that WDF Queues are actively controlled by the Framework. Each WDF Queue may be in one of several different states:
- Started: If a Queue is in the Started state, when a Request arrives the Queue will immediately consider it for presentation to the driver according to the rules for its Dispatch Type. For example, if a new Request arrives at an empty Queue in Started state, and that Queue uses the Sequential Dispatch Type, the Request will be presented to the driver if there are currently no other Requests active in the driver from that Queue.
- Stopped: When a Queue is in the Stopped state, newly arriving Requests will be inserted on the Queue and held on the Queue indefinitely. If the Queue’s state changes to Started, Requests pending on the Queue will be evaluated for presentation according to the rules for the Queue’s Dispatch Type.
Therefore, if the previously described empty Sequential Queue is in the Stopped state when a Request arrives, that Request will be placed on the Queue. Later, if the Queue’s state changes to Started the first Request that arrived to the Queue will be presented to the driver. Because the Queue in this example uses the Sequential Dispatch type, additional Requests will not be presented until the previously presented Request is completed or forwarded as previously described.
- Purged: When a Queue is in the Purged state any Requests that are on the Queue or that arrive for that Queue are immediately completed by the Framework with an error status. Therefore, if a Queue that was in the Stopped state is changed (by command from either the driver or the Framework) to the Purged state, any Requests that happened to be waiting on the Queue are completed by the Framework with an error status. Any new Requests that arrive for this Queue are also completed by the Framework with an error status; they are not inserted onto the Queue.
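A driver can change a Queue’s state explicitly, too. As a rough sketch (myQueue is assumed to be an existing WDFQUEUE handle, and the synchronous variants shown here must be called at PASSIVE_LEVEL):

```c
//
// Hold newly arriving Requests on the Queue without presenting them.
//
WdfIoQueueStopSynchronously(myQueue);

//
// Resume presenting Requests according to the Queue's Dispatch Type.
//
WdfIoQueueStart(myQueue);

//
// Complete anything on the Queue (and anything that arrives later)
// with an error status.
//
WdfIoQueuePurgeSynchronously(myQueue);
```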
Power Managed
An extremely handy feature of WDF Queues is that they are “power managed” by default. This means that the Framework will automatically change the Queue’s state according to the D-State of the device with which the Queue is associated.
When a device is in the fully powered, working (D0), state the Framework sets that device’s Queues to the Started state. This results in arriving Requests being presented to the driver according to the rules for the Queue’s Dispatch Type, as previously described. When the device transitions to a non-working power state (that is, any device power state other than D0), the Framework will automatically put the Queue into the Stopped state, resulting in Requests arriving at the Queue being held. In addition, if a Request arrives at a power managed Queue while the device is idling in a low power state (any state other than D0, as a result of the device putting itself to sleep to save power) the Framework will automatically initiate the process to return the device to the working (D0) power state so it can resume processing Requests.
This is a terrific feature, because it frees drivers from having to be concerned about what power state the device is in when Requests arrive. You certainly don’t want read or write requests to be presented to your driver before your driver has had the chance to power-up your device to handle them, right? Right! And, with power managed Queues, the Framework keeps that from happening.
Of course, you can override the Framework’s default behavior by setting the Queue to be non-power managed. In this case, the state of the Queue isn’t changed automatically based on the associated device’s D-state.
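Opting out is just a matter of setting a field at Queue configuration time. A hedged sketch:

```c
WDF_IO_QUEUE_CONFIG queueConfig;

WDF_IO_QUEUE_CONFIG_INIT(&queueConfig, WdfIoQueueDispatchParallel);

//
// Don't let the Framework Stop/Start this Queue based on the
// device's D-state.
//
queueConfig.PowerManaged = WdfFalse;
```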
Automatic Cancellation
A final feature of WDF Queues that makes WDF driver development convenient is that Queues handle cancellation of pending Requests automatically. This means that if a Request is cancelled when it is pending on a Queue the Framework will automatically remove that Request from the Queue and complete it with a cancelled status. A driver can choose to be informed about the cancellation by specifying an appropriate Event Processing Callback (discussed later). Reasons that a Request might be cancelled when it’s on a Queue include the thread that initiated the Request exiting, or the issuing thread calling CancelIo or the issuing process calling CancelIoEx.
Queue Events and Event Processing Callbacks
So how exactly do Queues pass Requests to your driver for processing? It’s simple: Like all WDF Objects, Queues are capable of raising a given set of events. A driver may choose to handle a subset of these events by providing appropriate Event Processing Callbacks.
The most important Queue-based Event Processing Callbacks available to your driver are the I/O Event Processing Callbacks. These callbacks are invoked by the Framework to present Requests from a Queue to your driver. On each of these callbacks, the Framework passes your driver a handle to the Queue from which the Request originated, and a handle to the Request itself. Other parameters vary, depending on the specific callback.
Note that I/O Event Processing Callbacks are only used for Queues that have been configured with the Sequential or Parallel Dispatch Type. Queues that are configured with the Manual Dispatch Type do not raise I/O events, and thus I/O Event Processing Callbacks are not allowed for these Queues. In fact, the Framework returns an error if you try to specify an I/O Event Processing Callback for a Queue with the Manual Dispatch Type.
The I/O Event Processing Callbacks that your driver can handle are:
- EvtIoRead: This callback is invoked by the Framework when it has a read Request to be presented to the driver from the Queue.
- EvtIoWrite: This callback is invoked by the Framework when it has a write Request to be presented to the driver from the Queue.
- EvtIoDeviceControl: This callback is invoked by the Framework when it has a Device I/O Control (IOCTL) Request to be presented to the driver from the Queue.
- EvtIoInternalDeviceControl: This callback is invoked by the Framework when it has an Internal Device Control Request to be presented to the driver from the Queue.
- EvtIoDefault: This callback is invoked by the Framework when it has a Request to be presented to the driver from the Queue, and one of the more specific I/O Event Processing Callbacks has not been specified by the driver.
If you think about it a bit, I expect you’ll understand how the above I/O Event Processing Callbacks are used. Consider, for example, a driver that has a single Queue using Parallel Dispatching for which EvtIoRead, EvtIoWrite and EvtIoDefault Event Processing Callbacks have been supplied. If a read operation is sent to the device, the Framework will present that Request to the driver by calling the EvtIoRead Event Processing callback. Likewise, if a write operation is sent to the device, the Framework will present that Request by calling the driver’s EvtIoWrite Event Processing Callback. However, if a Device Control Request arrives at the Queue, the Framework will present that Request via the driver’s EvtIoDefault Event Processing Callback, because the driver did not configure the EvtIoDeviceControl Event Processing Callback to handle this specific type of Request.
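To make that example concrete, here’s a hedged sketch of how such a Queue might be configured. The MyEvtIoXxx names are placeholders, and device is assumed to be your WDFDEVICE handle.

```c
WDF_IO_QUEUE_CONFIG queueConfig;
WDFQUEUE            queue;
NTSTATUS            status;

//
// The device's Default Queue, using Parallel Dispatching.
//
WDF_IO_QUEUE_CONFIG_INIT_DEFAULT_QUEUE(&queueConfig,
                                       WdfIoQueueDispatchParallel);

queueConfig.EvtIoRead    = MyEvtIoRead;      // placeholder names
queueConfig.EvtIoWrite   = MyEvtIoWrite;
queueConfig.EvtIoDefault = MyEvtIoDefault;   // catches everything else,
                                             // including device controls

status = WdfIoQueueCreate(device,
                          &queueConfig,
                          WDF_NO_OBJECT_ATTRIBUTES,
                          &queue);
```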
It is extremely important to note that the EvtIoXxx Event Processing Callbacks to your driver occur in an arbitrary process and thread context, and (by default) at an IRQL less than or equal to DISPATCH_LEVEL. That means you can’t know in advance what thread is running when one of your EvtIoXxx callbacks gets called, and also that (by default) you can’t touch pageable memory or use Dispatcher Objects (such as mutexes and events) for synchronization within these callbacks.
There are also Event Processing Callbacks that allow your driver to be informed when a Queue transitions into and out of the Stopped state. These callbacks are only needed in rare cases, and are thus much less frequently used than the I/O Event Processing Callbacks.
Before leaving our discussion of Event Processing Callbacks, we should note one more type of Event Processing Callback that your driver can specify: That’s the Event Processing Callback for Request cancellation. If a Request is currently on a Queue and is aborted (either as a result of the user attempting to cancel it or the thread that issued it attempting to exit), the Framework will call a driver’s EvtIoCanceledOnQueue Event Processing Callback. Note that this callback is only invoked when a Request is canceled while on a Queue (or immediately before being queued), thus it’s not likely to provide much value in the case of a parallel Queue.
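If you do want to be told about cancellation, a minimal sketch of the callback might look like the following (the name is ours, not a required one). Keep in mind that once you register this callback, it’s your driver’s job to complete the canceled Request.

```c
VOID
MyEvtIoCanceledOnQueue(
    WDFQUEUE   Queue,
    WDFREQUEST Request
    )
{
    UNREFERENCED_PARAMETER(Queue);

    //
    // Undo any bookkeeping the driver associated with this Request,
    // then complete it as canceled.
    //
    WdfRequestComplete(Request, STATUS_CANCELLED);
}
```

You’d then hook this up when configuring the Queue, by setting queueConfig.EvtIoCanceledOnQueue = MyEvtIoCanceledOnQueue; before calling WdfIoQueueCreate.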
Incoming Request Queues
A driver is free to create as many Queues as it likes. However, at least one of these Queues will need to be an Incoming Request Queue. An Incoming Request Queue is a Queue through which the Framework presents Requests to the driver.
The most common way for a driver to specify an Incoming Request Queue is by marking a Queue it creates as the Default Queue for a device. A driver can create exactly one default Queue for a given device. In addition (or, in fact, even in place of) a default Queue, a driver may choose to create one or more additional non-default Queues and indicate to the Framework that these Queues are Incoming Request Queues for specific Request types. A driver does this by configuring Request dispatching, indicating the Queue and the particular request type (such as read, write, or device control) that will be routed to the Queue from the Framework.
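For example, routing all device control Requests to a particular non-default Queue might look something like this sketch, where ioctlQueue is assumed to be a Queue the driver created earlier:

```c
NTSTATUS status;

//
// Ask the Framework to route every device control Request it receives
// for this device to ioctlQueue instead of the Default Queue.
//
status = WdfDeviceConfigureRequestDispatching(device,
                                              ioctlQueue,
                                              WdfRequestTypeDeviceControl);
```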
Most WDF drivers utilize a single, default, Incoming Request Queue for receiving Requests from the Framework. However, the ability to use multiple WDF Queues to sort and organize Requests is one of the best features of WDF Queues!
Using Multiple Queues
It might not be immediately apparent why using multiple WDF Queues can be handy. Let’s look at a few examples to illustrate some common uses for this feature.
As an example, let’s consider a driver for a simple device that processes read and write operations, and that also has a set of IOCTL control codes that can be used to enable, disable, and get statistics for the device. Maybe the device is a simple point-to-point communications link. The exact function of the device doesn’t matter. What does matter is that it supports the previously specified three types of I/O requests.
Like most devices, our example device can only handle read and write operations when the device is fully powered on (in D0). This won’t present any problems for us because WDF Queues are, by default, power managed. That means that if we use a power managed Queue to handle incoming read and write operations, those Requests will only be presented to the driver when the device is in its fully powered state.
However, note the IOCTLs that the driver must support. These control codes, to enable and disable the device and to gather statistics from the device, can be handled by the driver without regard to the device’s power state. If we configure our driver to send all arriving I/O requests to a single Queue that is Power Managed, the IOCTLs won’t be delivered when the device is in a lower power state. And, if we choose to have all I/O requests delivered to a single Queue that is not Power Managed, the Framework will deliver read and write requests to the driver when the device is in a low power state and they can’t be handled!
What do we do? Well, to support this device, a WDF driver can choose to configure two Incoming Request Queues: One default Power Managed Queue that handles the read and write requests, and another Queue that is not power managed, which the driver configures to handle only Device Control requests. In this way, read and write operations are handled by the Framework according to the power management state of the device, and IOCTLs are delivered to the driver without regard to the device’s power state. Problem Solved!
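Putting the pieces together, a hedged sketch of that two-Queue setup might look like this (the MyEvtIoXxx callback names are placeholders, and error handling is abbreviated):

```c
WDF_IO_QUEUE_CONFIG queueConfig;
WDFQUEUE            readWriteQueue;
WDFQUEUE            ioctlQueue;
NTSTATUS            status;

//
// Default Queue: power managed (the default), handles reads and writes.
//
WDF_IO_QUEUE_CONFIG_INIT_DEFAULT_QUEUE(&queueConfig,
                                       WdfIoQueueDispatchParallel);
queueConfig.EvtIoRead  = MyEvtIoRead;      // placeholder callbacks
queueConfig.EvtIoWrite = MyEvtIoWrite;

status = WdfIoQueueCreate(device,
                          &queueConfig,
                          WDF_NO_OBJECT_ATTRIBUTES,
                          &readWriteQueue);

//
// Second Queue: NOT power managed, handles only device controls.
//
WDF_IO_QUEUE_CONFIG_INIT(&queueConfig, WdfIoQueueDispatchParallel);
queueConfig.EvtIoDeviceControl = MyEvtIoDeviceControl;
queueConfig.PowerManaged       = WdfFalse;

status = WdfIoQueueCreate(device,
                          &queueConfig,
                          WDF_NO_OBJECT_ATTRIBUTES,
                          &ioctlQueue);

//
// Route device control Requests to the non-power-managed Queue.
//
status = WdfDeviceConfigureRequestDispatching(device,
                                              ioctlQueue,
                                              WdfRequestTypeDeviceControl);
```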
How about another example: Did you ever have to write a driver for a device that can handle one read plus one write simultaneously, but not two reads or two writes? This is a pretty common requirement, and if you’ve been writing drivers for a while you’ve probably encountered a device like this. This requirement is easily handled by configuring two Incoming Request Queues, each with the Sequential Dispatching Type. You configure one Queue to handle incoming read requests, and the other to handle incoming write requests.
Extending that last example a bit further, maybe you want to modify this driver to support device control operations. Adding this support is simple! Just configure another Incoming Request Queue and tell the Framework to route any received device control Requests to that Queue. Note that when you configure that Queue, you can easily choose whether the queue is Power Managed, and you can also choose how many device control Requests your driver will have in progress at a time by specifying the Dispatch Type. And you can do this easily, without disturbing the conditions under which your driver already handles read and write operations.
How About Those Manual Queues
So far, we’ve primarily focused on Queues that use the Sequential and Parallel Dispatch Types. How do Queues with the Manual Dispatch Type fit in?
As mentioned previously, you’re allowed to create as many Queues as you want, and not all of them need be Incoming Request Queues. You might, for example, create a Queue with the Manual Dispatch Type that your driver uses to hold Requests that are waiting for some event to occur. In this case, your driver would forward Requests presented through one of your Incoming Request Queues to your manual Queue.
Drivers often need the ability to “park” Requests within the driver to be completed only when some asynchronous event occurs. For example, in the OSR USB FX2 device there is a switch pack that generates an interrupt with the state of the switches when they are toggled. Instead of having the application poll the driver to determine if something has changed, it would be nice to let the application send asynchronous IOCTLs that get completed when the switch pack changes.
This is a perfect fit for a manual Queue. Requests to read the switch pack arrive at one of the FX2 driver’s Incoming Request Queues and are then promptly forwarded to a manual Queue. When the device interrupts, the driver simply drains its manual Queue and completes each of the Requests it retrieves from that Queue with the state of the switch pack. Clean and simple, with the added benefit of cancellation of the Requests being completely handled for you!
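Here’s a hedged sketch of that pattern. This is not the actual FX2 sample code: switchWaitQueue and the other names are ours, and a real driver would also validate the IOCTL code and output buffer before parking the Request.

```c
//
// In the Incoming Request Queue's callback: park the Request on a
// manual Queue until the switches change.
//
VOID
MyEvtIoDeviceControl(
    WDFQUEUE   Queue,
    WDFREQUEST Request,
    size_t     OutputBufferLength,
    size_t     InputBufferLength,
    ULONG      IoControlCode
    )
{
    NTSTATUS status;

    UNREFERENCED_PARAMETER(Queue);
    UNREFERENCED_PARAMETER(OutputBufferLength);
    UNREFERENCED_PARAMETER(InputBufferLength);
    UNREFERENCED_PARAMETER(IoControlCode);

    //
    // A real driver would switch on IoControlCode here; this sketch
    // assumes the "wait for switch change" IOCTL.
    //
    status = WdfRequestForwardToIoQueue(Request, switchWaitQueue);

    if (!NT_SUCCESS(status)) {
        WdfRequestComplete(Request, status);
    }
}

//
// Later, when the device interrupts: drain the manual Queue and
// complete each waiting Request with the new switch state.
//
VOID
CompleteWaitingSwitchRequests(
    UCHAR SwitchState
    )
{
    WDFREQUEST request;
    PUCHAR     buffer;

    while (NT_SUCCESS(WdfIoQueueRetrieveNextRequest(switchWaitQueue,
                                                    &request))) {

        if (NT_SUCCESS(WdfRequestRetrieveOutputBuffer(request,
                                                      sizeof(UCHAR),
                                                      (PVOID*)&buffer,
                                                      NULL))) {
            *buffer = SwitchState;
            WdfRequestCompleteWithInformation(request,
                                              STATUS_SUCCESS,
                                              sizeof(UCHAR));
        } else {
            WdfRequestComplete(request, STATUS_UNSUCCESSFUL);
        }
    }
}
```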
Queue Gotchas
Of course, there are going to be some gotchas that you might run into sooner or later. In order to give your “later” the benefit of our “sooner,” here are some things to note when writing your driver:
- If a Request is presented to your driver from a Sequential Queue and you forward that Request to a secondary Queue, another Request may be presented to your driver from the Queue. Thus, if your device only supports one I/O Request at a time, parking the in-progress Request on a manual Queue is not an option.
- Don’t use the Dispatch Type as a cheap means of serialization. While it might at first seem tempting to set your Incoming Request Queues to Sequential and never worry about locking, that’s not really the spirit of the Sequential Queue. Sequential Queues should only be used when your device can only support one operation at a time; WDF’s Synchronization Scope (not covered in this article) should be used in all other cases.
- You cannot complete Requests while they are on a Queue.
Queue Power!
So, that’s a brief introduction to WDF Queues. We hope you agree that WDF Queues are one of the most powerful, interesting, and useful features of WDF.