A practical comparison of list, deque, heapq, and the queue module for queue implementations.
| Data Structure | Best For | append | pop left | pop right | peek min/max | Random Access |
|---|---|---|---|---|---|---|
| list | Stack (LIFO) | O(1)* | O(n) | O(1)* | O(n) | O(1) |
| collections.deque | Queue (FIFO) | O(1) | O(1) | O(1) | O(n) | O(n) |
| heapq | Priority Queue | O(log n) | O(log n) | — | O(1) | O(n) |
| queue.Queue | Thread-safe FIFO | O(1) | O(1) blocking | — | — | — |
| queue.PriorityQueue | Thread-safe Priority | O(log n) | O(log n) blocking | — | — | — |
*amortized
**Use `list` when:**
- You need a stack (LIFO — last in, first out)
- You need random access by index
- You're only adding/removing from one end
```python
stack = []
stack.append("first")
stack.append("second")
stack.pop()   # "second" — O(1)
stack[0]      # "first" — O(1) random access
```

**Don't use `list` for:** FIFO queues — `list.pop(0)` is O(n)!
**Use `collections.deque` when:**
- You need a FIFO queue (first in, first out)
- You add/remove from both ends
- You want a sliding window with `maxlen`
```python
from collections import deque

# FIFO queue
queue = deque()
queue.append("first")   # Add to right
queue.append("second")
queue.popleft()         # "first" — O(1)

# Sliding window (keeps last 3)
window = deque(maxlen=3)
for i in range(5):
    window.append(i)
print(window)           # deque([2, 3, 4], maxlen=3)
```

**Don't use `deque` for:** random access by index — `deque[i]` is O(n) for large deques.
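One concrete use of the `maxlen` window is a running average over the last few readings; a small sketch with invented sensor values:

```python
from collections import deque

window = deque(maxlen=3)
averages = []
for reading in [10, 20, 30, 40, 50]:
    window.append(reading)   # once full, the oldest reading is evicted automatically
    averages.append(sum(window) / len(window))

print(averages)  # [10.0, 15.0, 20.0, 30.0, 40.0]
```

Because eviction is automatic, there is no manual index bookkeeping for the window boundary.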
**Use `heapq` when:**
- You need a priority queue (highest/lowest priority first)
- Order matters but isn't simple FIFO
- You're implementing Dijkstra's algorithm, task schedulers, or merging k sorted lists
```python
import heapq

# Priority queue (min-heap)
heap = []
heapq.heappush(heap, (3, "low priority"))
heapq.heappush(heap, (1, "high priority"))
heapq.heappush(heap, (2, "medium priority"))
heapq.heappop(heap)   # (1, "high priority") — O(log n)

# For a max-heap, negate the priority:
# heapq.heappush(heap, (-priority, item))
```

**Don't use `heapq` for:** simple FIFO — it's overkill and slower than `deque`.
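The negation trick, shown end to end; the task names and priorities here are invented for illustration:

```python
import heapq

tasks = []
for priority, name in [(2, "email"), (9, "pager alert"), (5, "report")]:
    heapq.heappush(tasks, (-priority, name))   # negate so the largest priority pops first

order = []
while tasks:
    neg_priority, name = heapq.heappop(tasks)
    order.append((-neg_priority, name))        # un-negate for display

print(order)  # [(9, 'pager alert'), (5, 'report'), (2, 'email')]
```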
**Use the `queue` module when:**
- You need thread-safe operations
- You want blocking get/put (wait for items)
- You're implementing producer-consumer patterns
```python
from queue import Queue
import threading

q = Queue()

def producer():
    q.put("task")

def consumer():
    item = q.get()   # Blocks until an item is available
    process(item)
    q.task_done()

# Wait for all tasks to complete
q.join()
```

**Don't use the `queue` module for:** single-threaded code — the locking overhead is unnecessary.
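The sketch above never actually starts a thread. A minimal runnable version of the producer-consumer pattern, using a `None` sentinel to stop the worker (the sentinel and the doubling step are illustrative choices, not part of the original):

```python
from queue import Queue
import threading

q = Queue()
results = []

def worker():
    while True:
        item = q.get()        # blocks until an item is available
        if item is None:      # sentinel: time to shut down
            q.task_done()
            break
        results.append(item * 2)
        q.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(5):
    q.put(i)
q.put(None)                   # tell the worker to stop
q.join()                      # wait until every put() has been task_done()
t.join()
print(results)  # [0, 2, 4, 6, 8]
```

`q.join()` only returns once `task_done()` has been called for every item, which is why the worker calls it even for the sentinel.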
**`list` internals:**

```
┌───┬───┬───┬───┬───┬───┬───┬───┐
│ a │ b │ c │ d │ e │   │   │   │  ← Contiguous memory
└───┴───┴───┴───┴───┴───┴───┴───┘
                      ↑
             capacity for growth
```

- Structure: Contiguous array that grows by ~1.125x when full
- Why `append` is O(1) amortized: usually just writes to the next slot; occasionally reallocates and copies
- Why `pop(0)` is O(n): must shift all remaining elements left to fill the gap
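The over-allocation is easy to observe: watch `sys.getsizeof` change as a list grows. A minimal sketch (exact sizes and reallocation points vary by Python version and platform):

```python
import sys

lst = []
realloc_lengths = []          # list lengths at which the backing array was reallocated
prev = sys.getsizeof(lst)
for i in range(1000):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:          # size changed, so CPython reallocated the array
        realloc_lengths.append(len(lst))
        prev = size

print(f"1000 appends triggered only {len(realloc_lengths)} reallocations")
```

Far fewer reallocations than appends is exactly what makes `append` O(1) amortized.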
**`collections.deque` internals:**

```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Block 0   │ ←→ │   Block 1   │ ←→ │   Block 2   │
│ [a,b,c,...] │     │ [h,i,j,...] │     │ [o,p,_,_,_] │
│  64 items   │     │  64 items   │     │  partially  │
└─────────────┘     └─────────────┘     └─────────────┘
       ↑                                       ↑
   leftblock                               rightblock
```

- Structure: Doubly-linked list where each node holds 64 elements
- Why O(1) at both ends: just update the index within a block, or link a new block
- Why O(n) random access: must traverse blocks to find the index
- Memory: ~8 bytes per element (amortized) vs ~48 bytes for a naive linked list
**`heapq` internals:**

```
Array: [1, 3, 2, 7, 4, 5]

        [0]
         1           Parent of i:  (i - 1) // 2
       /   \         Left child:   2*i + 1
    [1]     [2]      Right child:  2*i + 2
     3       2
    /  \    /
  [3]  [4] [5]
   7    4   5
```

- Structure: Complete binary tree stored in a flat array
- Why O(log n) push/pop: tree height is log n; elements must bubble up/down
- Why O(1) peek min: the minimum is always at index 0 (the root)
- Space efficient: no pointers, just array indices
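The index arithmetic above can be checked directly on a small heap; a quick sketch using `heapq.heapify`:

```python
import heapq

data = [7, 3, 5, 1, 4, 2]
heapq.heapify(data)   # O(n): rearranges the list in place into heap order

# Verify the heap invariant with the formulas above:
# every parent is <= both of its children.
for i in range(len(data)):
    for child in (2 * i + 1, 2 * i + 2):
        if child < len(data):
            assert data[i] <= data[child]

print(data[0])   # the minimum is always at index 0 -> 1
```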
```python
import timeit
from collections import deque
import heapq

n = 100_000

# List as queue (FIFO) — DON'T DO THIS
def list_queue():
    q = []
    for i in range(n):
        q.append(i)
    for i in range(n):
        q.pop(0)   # O(n) each time!

# Deque as queue (FIFO) — CORRECT
def deque_queue():
    q = deque()
    for i in range(n):
        q.append(i)
    for i in range(n):
        q.popleft()   # O(1)

# Heap as priority queue
def heap_queue():
    q = []
    for i in range(n):
        heapq.heappush(q, i)
    for i in range(n):
        heapq.heappop(q)

for fn in (deque_queue, heap_queue, list_queue):
    print(fn.__name__, timeit.timeit(fn, number=1))
```

Results:
| Implementation | Time (100k ops) | Relative |
|---|---|---|
| deque FIFO | ~0.02s | 1x |
| heapq priority | ~0.08s | 4x |
| list.pop(0) FIFO | ~2.5s | 125x |
The list.pop(0) approach is catastrophically slow for queues.
```python
# WRONG — O(n²) total for n operations
queue = []
queue.append(item)
item = queue.pop(0)   # Shifts all remaining elements!

# RIGHT — O(n) total
from collections import deque
queue = deque()
queue.append(item)
item = queue.popleft()   # O(1)
```

```python
# WRONG — O(n) to find the highest priority
from collections import deque
tasks = deque()
tasks.append((priority, task))
# How do you get the highest priority? Scan the entire deque...

# RIGHT — O(log n) to get the highest priority
import heapq
tasks = []
heapq.heappush(tasks, (-priority, task))   # Negate for max-heap
highest = heapq.heappop(tasks)
```

```python
# OVERKILL — heapq is slower than deque for FIFO
import heapq

counter = 0
queue = []

def enqueue(item):
    global counter
    heapq.heappush(queue, (counter, item))
    counter += 1

def dequeue():
    return heapq.heappop(queue)[1]

# SIMPLER AND FASTER
from collections import deque
queue = deque()
queue.append(item)
queue.popleft()
```

```python
# WRONG in multi-threaded code
from collections import deque
shared_queue = deque()   # Not thread-safe for compound operations!

# Thread A:                      # Thread B:
#   if len(shared_queue) > 0:    #   if len(shared_queue) > 0:   <- both can be true!
#       shared_queue.popleft()   #       shared_queue.popleft()  <- one raises IndexError!

# RIGHT
from queue import Queue, Empty
shared_queue = Queue()
try:
    item = shared_queue.get_nowait()
except Empty:
    pass
# Or use blocking: item = shared_queue.get()  # Waits for an item
```

```
Do you need priority ordering?
│
├── YES → Do you need thread safety?
│         │
│         ├── YES → queue.PriorityQueue
│         └── NO  → heapq
│
└── NO → Do you need both-ends access?
         │
         ├── YES → collections.deque
         │
         └── NO → Which end do you use?
                  │
                  ├── Right only (stack) → list
                  │
                  └── Left only (queue) → Do you need thread safety?
                                          │
                                          ├── YES → queue.Queue
                                          └── NO  → collections.deque
```
For our interview Task Queue problem, here's how each approach would work:
```python
from collections import deque

class FIFOTaskQueue:
    def __init__(self):
        self._queue = deque()
        self._tasks = {}   # For status lookup

    def add_task(self, task_id, payload):
        task = {"id": task_id, "payload": payload, "status": "queued"}
        self._tasks[task_id] = task
        self._queue.append(task_id)

    def get_next_task(self):   # O(1)
        while self._queue:
            task_id = self._queue.popleft()
            task = self._tasks.get(task_id)
            if task and task["status"] == "queued":
                task["status"] = "processing"
                return task
        return None
```

Time: O(1) for both operations. Use when: tasks are processed in arrival order (FIFO).
```python
import heapq

class PriorityTaskQueue:
    def __init__(self):
        self._heap = []
        self._tasks = {}
        self._counter = 0   # Tie-breaker: preserves FIFO order within a priority

    def add_task(self, task_id, priority, payload):
        task = {"id": task_id, "priority": priority, "payload": payload, "status": "queued"}
        self._tasks[task_id] = task
        heapq.heappush(self._heap, (-priority, self._counter, task_id))
        self._counter += 1

    def get_next_task(self):   # O(log n)
        while self._heap:
            _, _, task_id = heapq.heappop(self._heap)
            task = self._tasks.get(task_id)
            if task and task["status"] == "queued":
                task["status"] = "processing"
                return task
        return None
```

Time: O(log n) for both operations. Use when: tasks have different priorities.
```python
class NaivePriorityTaskQueue:
    def __init__(self):
        self._queue = []   # Kept sorted by priority
        self._tasks = {}

    def add_task(self, task_id, priority, payload):
        task = {"id": task_id, "priority": priority, "payload": payload, "status": "queued"}
        self._tasks[task_id] = task
        self._queue.append(task_id)
        self._queue.sort(key=lambda tid: -self._tasks[tid]["priority"])   # O(n log n)!

    def get_next_task(self):   # O(n) due to list operations
        while self._queue:
            task_id = self._queue.pop(0)   # O(n) shift!
            task = self._tasks.get(task_id)
            if task and task["status"] == "queued":
                task["status"] = "processing"
                return task
        return None
```

Time: O(n log n) add, O(n) get — much slower! Use when: never in production; acceptable in an interview if the candidate understands the tradeoff.
| Queue Type | Data Structure | Add | Remove | Use Case |
|---|---|---|---|---|
| FIFO | deque | O(1) | O(1) | Task queue, BFS, buffers |
| LIFO (Stack) | list | O(1) | O(1) | DFS, undo history, call stack |
| Priority | heapq | O(log n) | O(log n) | Schedulers, Dijkstra, merge k lists |
| Thread-safe FIFO | queue.Queue | O(1) | O(1) blocking | Producer-consumer, worker pools |
| Thread-safe Priority | queue.PriorityQueue | O(log n) | O(log n) blocking | Concurrent task scheduling |