# Data Structures: Time Complexities of Common Methods

The tables below group data structures by the abstract interface they implement, with data structures as rows and operation running times as columns.

Assumptions for Time Complexities:

  • N: Number of elements in the data structure.
  • h: Height of a tree.
  • log N: Base 2 unless otherwise specified.
  • alpha(N): Inverse Ackermann function, practically constant.
  • M: Table size (for hash tables); m: order of a B-Tree.

# Group 1: List / Sequence ADT Implementations

This group focuses on data structures that represent a linear sequence of elements, allowing access by index, insertion, and removal.

| Data Structure | insertAtFront | insertAtIndex | removeAtIndex | findIndex / findData (Unsorted) | findIndex / findData (Sorted) | insertAtEnd | removeAtEnd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Array List (Dynamic Array) | O(N) | O(N) | O(N) | O(N) | O(log N) | O(1) amortized | O(1) |
| Linked List | O(1) | O(N) | O(N) | O(N) | O(N) | O(1) (with tail) | O(N) (singly) |

Notes:

  • Array List insertAtEnd: O(1) amortized due to occasional O(N) resizing (see the sketch below). insertAtFront and insertAtIndex are O(N) due to element shifting.
  • Linked List insertAtEnd: O(1) if a tail pointer is maintained; otherwise O(N) (requires traversal). removeAtEnd is O(N) for singly linked lists (requires finding the node before the last one).
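
To make the amortized bound concrete, here is a minimal dynamic-array sketch in C++; the class name `DynArray` and the int-only element type are illustrative simplifications, not any particular library's API:

```cpp
#include <cstddef>

// Illustrative dynamic array: push_back doubles the capacity when full,
// so N insertions do O(N) total copying work, i.e. O(1) amortized each.
class DynArray {
    int* data_ = nullptr;
    std::size_t size_ = 0, cap_ = 0;

public:
    void push_back(int x) {
        if (size_ == cap_) {                    // occasional O(N) resize
            std::size_t newCap = cap_ ? cap_ * 2 : 1;
            int* bigger = new int[newCap];
            for (std::size_t i = 0; i < size_; ++i) bigger[i] = data_[i];
            delete[] data_;
            data_ = bigger;
            cap_ = newCap;
        }
        data_[size_++] = x;                     // the common O(1) path
    }
    std::size_t size() const { return size_; }
    ~DynArray() { delete[] data_; }
};
```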

# Group 2: Stack & Queue ADT Implementations

These are constrained list-like structures following LIFO (Stack) or FIFO (Queue) principles.

| Data Structure | push / enqueue | pop / dequeue | top / front / peek |
| --- | --- | --- | --- |
| Stack (LIFO), Array-based | O(1) amortized | O(1) | O(1) |
| Stack (LIFO), Linked List-based | O(1) | O(1) | O(1) |
| Queue (FIFO), Circular Array-based | O(1) amortized | O(1) | O(1) |
| Queue (FIFO), Linked List-based | O(1) | O(1) | O(1) |

Notes:

  • Array-based implementations of stacks/queues have an O(N) worst case for push / enqueue when resizing is needed, but are O(1) amortized.
  • dequeue on a non-circular array queue would be O(N) if elements were shifted after removal; in practice a circular array or a linked list is used to keep it O(1) (sketched below).
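
A minimal sketch of that circular-array idea, assuming a fixed capacity (growth is omitted for brevity); the names `CircularQueue`, `enqueue`, and `dequeue` are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Illustrative fixed-capacity circular queue: the head index wraps
// around, so dequeue never shifts elements and both operations are O(1).
class CircularQueue {
    std::vector<int> buf_;
    std::size_t head_ = 0, count_ = 0;

public:
    explicit CircularQueue(std::size_t cap) : buf_(cap) {}

    bool enqueue(int x) {
        if (count_ == buf_.size()) return false;      // full (resizing omitted)
        buf_[(head_ + count_) % buf_.size()] = x;     // O(1)
        ++count_;
        return true;
    }

    bool dequeue(int& out) {
        if (count_ == 0) return false;                // empty
        out = buf_[head_];
        head_ = (head_ + 1) % buf_.size();            // advance head, no shifting
        --count_;
        return true;
    }
};
```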

# Group 3: Set / Dictionary / Map ADT Implementations (Key-Value Stores)

This group includes data structures that store key-value pairs and allow efficient lookup, insertion, and deletion based on the key.

| Data Structure | find (Search) | insert | delete |
| --- | --- | --- | --- |
| Binary Search Tree (BST) | O(h) / O(N) worst | O(h) / O(N) worst | O(h) / O(N) worst |
| AVL Tree (Self-Balancing BST) | O(log N) | O(log N) | O(log N) |
| Hash Table (Average Case) | O(1) | O(1) | O(1) |
| Hash Table (Worst Case) | O(N) | O(N) | O(N) |
| B-Tree | O(log_m N) | O(log_m N) | O(log_m N) |
| k-d Tree (nearestNeighbor) | O(log N) avg / O(N) worst | N/A (usually bulk-built, not inserted incrementally) | N/A |
| k-d Tree (rangeSearch) | O(sqrt(N)) avg / O(N) worst | N/A | N/A |

Notes:

  • BST: h is the height of the tree. In the worst case (a skewed tree), h can be N. A sketch follows this list.
  • AVL Tree: Guarantees h = O(log N) by performing rotations.
  • Hash Table: Average-case performance assumes a good hash function and a low load factor; the worst case occurs with many collisions or a poor hash function. A rehash is O(N), but it happens rarely enough that insertion remains O(1) amortized.
  • B-Tree: m is the order (branching factor). log_m N is typically much smaller than log N for large m, which makes B-Trees ideal for disk-based storage.
  • k-d Tree: Primarily used for spatial data; insertion and deletion are not typically supported as common operations. Performance depends heavily on data distribution and query type.
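
To make the O(h) bound concrete, here is a minimal unbalanced-BST sketch in C++ (no rotations, no deletion, and node cleanup is omitted; the names are illustrative):

```cpp
// Illustrative unbalanced BST: find and insert walk a single
// root-to-leaf path, hence O(h); h is O(log N) only while the tree
// stays balanced, and degrades to O(N) for a skewed tree.
struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
    explicit Node(int k) : key(k) {}
};

Node* insert(Node* root, int key) {
    if (!root) return new Node(key);
    if (key < root->key)
        root->left = insert(root->left, key);
    else if (key > root->key)
        root->right = insert(root->right, key);
    return root;                                  // duplicate keys ignored
}

bool find(const Node* root, int key) {
    while (root) {
        if (key == root->key) return true;
        root = key < root->key ? root->left : root->right;
    }
    return false;
}
```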

# Group 4: Priority Queue ADT Implementations

This group focuses on data structures that support efficient extraction of the minimum (or maximum) element and insertion.

| Data Structure | insert | findMin / peekMin | removeMin | buildHeap (from array) |
| --- | --- | --- | --- | --- |
| Min-Heap (Binary Heap) | O(log N) | O(1) | O(log N) | O(N) |
| Unsorted Array | O(1) | O(N) | O(N) | N/A |
| Sorted Array | O(N) | O(1) | O(N) | N/A |
| Unsorted Linked List | O(1) | O(N) | O(N) | N/A |
| Sorted Linked List | O(N) | O(1) | O(1) | N/A |

Notes:

  • The binary heap is the most efficient general-purpose implementation of a priority queue; a usage sketch follows.
  • The list-based implementations are shown for comparison but are generally less efficient across the combined set of operations.
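
For reference, C++ already ships a binary-heap priority queue; this sketch shows a min-heap via std::greater plus an O(N) buildHeap with std::make_heap:

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <queue>
#include <vector>

int main() {
    // std::priority_queue is a max-heap by default; std::greater<int>
    // flips the comparison to get a min-heap.
    std::priority_queue<int, std::vector<int>, std::greater<int>> pq;
    for (int x : {5, 1, 4, 2, 3}) pq.push(x);   // each push: O(log N)
    std::printf("min = %d\n", pq.top());        // peekMin: O(1), prints 1
    pq.pop();                                   // removeMin: O(log N)

    // buildHeap in O(N): heapify an existing array in place.
    std::vector<int> v{5, 1, 4, 2, 3};
    std::make_heap(v.begin(), v.end(), std::greater<int>());
    std::printf("min = %d\n", v.front());       // root of the min-heap, prints 1
    return 0;
}
```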

# Group 5: Disjoint Set / Union-Find

A data structure that keeps track of a set of elements partitioned into a number of disjoint (non-overlapping) subsets.

| Data Structure | makeSet | union (by size/rank + path compression) | find (with path compression) |
| --- | --- | --- | --- |
| Disjoint Set | O(1) | O(alpha(N)) amortized | O(alpha(N)) amortized |

Notes:

  • alpha(N) (the inverse Ackermann function) grows so slowly that it is essentially constant for all practical purposes (alpha(N) < 5 for any realistically sized N). A sketch follows.
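
A minimal sketch of the structure under these assumptions (union by size plus path compression; the class and method names are illustrative):

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Illustrative Disjoint Set with union by size and path compression;
// each find/union is O(alpha(N)) amortized.
class DisjointSet {
    std::vector<int> parent_, size_;

public:
    explicit DisjointSet(int n) : parent_(n), size_(n, 1) {
        std::iota(parent_.begin(), parent_.end(), 0);  // makeSet for each element
    }

    int find(int x) {
        if (parent_[x] != x)
            parent_[x] = find(parent_[x]);             // path compression
        return parent_[x];
    }

    bool unite(int a, int b) {
        a = find(a);
        b = find(b);
        if (a == b) return false;                      // already in the same set
        if (size_[a] < size_[b]) std::swap(a, b);      // union by size
        parent_[b] = a;
        size_[a] += size_[b];
        return true;
    }
};
```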

# Group 6: Graph Representations & Traversals

This group covers how graphs are stored and common algorithms for exploring them.

| Representation | DFS (Depth First Search) | BFS (Breadth First Search) | addVertex | addEdge | checkAdjacency(u,v) | getNeighbors(u) |
| --- | --- | --- | --- | --- | --- | --- |
| Edge List | O(V + E) (impl. with map) | O(V + E) (impl. with map) | O(1) | O(1) | O(E) | O(E) |
| Adjacency Matrix | O(V^2) | O(V^2) | O(V^2) | O(1) | O(1) | O(V) |
| Adjacency List | O(V + E) | O(V + E) | O(1) | O(1) | O(deg(u)) | O(deg(u)) |

Notes:

  • V: Number of vertices, E: Number of edges, deg(u): Degree of vertex u.
  • Edge List: Stores just a list of (source, destination, weight) tuples.
    • addVertex / addEdge: Simple appends to lists.
    • checkAdjacency / getNeighbors: Require iterating through the entire edge list to find the relevant edges; both are inefficient.
    • DFS / BFS: These traversals need efficient access to each vertex's neighbors. To run them efficiently on an Edge List, you would first convert it to an Adjacency List (O(V + E)) or build a map<int, vector<int>> from vertices to their incident edges, which effectively turns it into an Adjacency List for traversal purposes; hence the "(impl. with map)" note. Edge Lists are rarely used directly for traversal in practice.
  • Adjacency Matrix is better for dense graphs or when checkAdjacency is frequent.
  • Adjacency List is better for sparse graphs or when iterating over neighbors (getNeighbors) is frequent; a DFS/BFS sketch follows.
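
A minimal C++ sketch of both traversals over an adjacency list; printing stands in for whatever per-vertex work a real traversal would do:

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Illustrative DFS and BFS over an adjacency list (vertices 0..V-1);
// each vertex and edge is touched once, giving O(V + E).
void dfs(const std::vector<std::vector<int>>& adj, int u,
         std::vector<bool>& seen) {
    seen[u] = true;
    std::printf("%d ", u);                        // "visit" u
    for (int v : adj[u])
        if (!seen[v]) dfs(adj, v, seen);
}

void bfs(const std::vector<std::vector<int>>& adj, int src) {
    std::vector<bool> seen(adj.size(), false);
    std::queue<int> q;
    seen[src] = true;
    q.push(src);
    while (!q.empty()) {
        int u = q.front();
        q.pop();
        std::printf("%d ", u);                    // "visit" u
        for (int v : adj[u])
            if (!seen[v]) { seen[v] = true; q.push(v); }
    }
}
```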

# Algorithms: Pseudocode and Complexity

# Minimum Spanning Tree (MST) Algorithms

Goal: Find a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight.

# Kruskal's Algorithm

Concept: Greedily adds the cheapest available edge that does not form a cycle. Uses a Disjoint Set data structure to detect cycles.

Pseudocode:

  1. Create a list of all edges in the graph.
  2. Sort all edges in non-decreasing order of their weights.
  3. Initialize a Disjoint Set data structure where each vertex is in its own set.
  4. Initialize an empty MST set.
  5. For each edge (u, v) from the sorted list:
    a. If find(u) is not equal to find(v) (i.e., u and v are in different sets):
      i. Add edge (u, v) to the MST set.
      ii. Perform union(u, v).
  6. Return the MST set.

Time Complexity:

  • Sorting edges: O(E log E)
  • Disjoint Set operations: O(E * alpha(V)) (for E edges, V vertices with path compression and union by size/rank)
  • Total: O(E log E), which is O(E log V) (since E <= V^2, log E = O(log V))
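
A compact sketch under these assumptions, with the union-find inlined (path halving instead of recursive compression, and no union by rank; adequate for illustration, though weaker than the bound quoted above):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Illustrative Kruskal: sort edges by weight, then accept each edge
// whose endpoints still lie in different components.
struct Edge { int u, v, w; };

std::vector<Edge> kruskalMST(int V, std::vector<Edge> edges) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.w < b.w; }); // O(E log E)

    std::vector<int> parent(V);
    std::iota(parent.begin(), parent.end(), 0);            // makeSet for all vertices
    auto find = [&](int x) {                               // find with path halving
        while (parent[x] != x) x = parent[x] = parent[parent[x]];
        return x;
    };

    std::vector<Edge> mst;
    for (const Edge& e : edges) {
        int ru = find(e.u), rv = find(e.v);
        if (ru != rv) {                                    // different sets: no cycle
            parent[ru] = rv;                               // union
            mst.push_back(e);
        }
    }
    return mst;                                            // V-1 edges iff connected
}
```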

# Prim's Algorithm

Concept: Starts from an arbitrary vertex and grows the MST by adding the cheapest edge from a vertex in the MST to a vertex outside the MST. Typically uses a min-priority queue.

Pseudocode:

  1. Initialize a min-priority queue PQ of (weight, vertex) pairs.
  2. Initialize dist[v] = infinity for all vertices v , and parent[v] = null .
  3. Choose a starting vertex s . Set dist[s] = 0 .
  4. Add (0, s) (weight, vertex) to PQ .
  5. Initialize visited[v] = false for all vertices.
  6. While PQ is not empty:
    a. Extract the pair (weight, u) with the minimum weight from PQ.
    b. If visited[u] is true, continue (already processed).
    c. Set visited[u] = true.
    d. For each neighbor v of u (with edge weight w(u,v)):
      i. If visited[v] is false and w(u,v) < dist[v]:
        1. Set dist[v] = w(u,v).
        2. Set parent[v] = u.
        3. Add (dist[v], v) to PQ.
  7. The MST is formed by the edges (parent[v], v) for all v != s.

Time Complexity:

  • Using Binary Heap (Adj List): O(E log V)
  • Using Fibonacci Heap (Adj List): O(E + V log V)
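
A sketch of the "lazy" binary-heap variant described above, assuming a connected graph given as an adjacency list of (neighbor, weight) pairs; it returns only the total MST weight, leaving out parent tracking for brevity:

```cpp
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Illustrative "lazy" Prim: the PQ may hold stale entries for already
// visited vertices; they are skipped on extraction. O(E log V) overall.
long long primMST(const std::vector<std::vector<std::pair<int, int>>>& adj) {
    const int V = static_cast<int>(adj.size());
    std::vector<bool> visited(V, false);
    using WV = std::pair<int, int>;                        // (weight, vertex)
    std::priority_queue<WV, std::vector<WV>, std::greater<WV>> pq;
    pq.push({0, 0});                                       // start at vertex 0

    long long total = 0;
    while (!pq.empty()) {
        auto [w, u] = pq.top();
        pq.pop();
        if (visited[u]) continue;                          // stale entry: skip
        visited[u] = true;
        total += w;                                        // edge that brought u in
        for (auto [v, wt] : adj[u])                        // neighbors of u
            if (!visited[v]) pq.push({wt, v});
    }
    return total;                                          // total MST weight
}
```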

# Shortest Path Algorithms

Goal: Find the path between two vertices (or from one vertex to all others) such that the sum of the edge weights along the path is minimized.

# Dijkstra's Algorithm

Concept: Finds the shortest paths from a single source vertex to all other vertices in a graph with non-negative edge weights. Uses a min-priority queue.

Pseudocode:

  1. Initialize dist[v] = infinity for all vertices v , and dist[source] = 0 .
  2. Initialize a min-priority queue PQ and add (0, source) (distance, vertex) to it.
  3. Initialize visited[v] = false for all vertices.
  4. While PQ is not empty:
    a. Extract the vertex u with the minimum dist value from PQ.
    b. If visited[u] is true, continue.
    c. Set visited[u] = true.
    d. For each neighbor v of u (with edge weight w(u,v)):
      i. If dist[u] + w(u,v) < dist[v]:
        1. Set dist[v] = dist[u] + w(u,v).
        2. Add (dist[v], v) to PQ.
  5. dist[v] now holds the shortest distance from source to v .

Time Complexity:

  • Using Binary Heap (Adjacency List): O(E log V)
  • Using Fibonacci Heap (Adjacency List): O(E + V log V)
  • Using Array (Adjacency Matrix): O(V^2)
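
A binary-heap sketch under the same adjacency-list representation; instead of a decrease-key operation, stale queue entries are simply skipped when popped:

```cpp
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Illustrative binary-heap Dijkstra over an adjacency list of
// (neighbor, weight) pairs; weights must be non-negative.
std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int, int>>>& adj, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    using DV = std::pair<long long, int>;                  // (distance, vertex)
    std::priority_queue<DV, std::vector<DV>, std::greater<DV>> pq;
    dist[src] = 0;
    pq.push({0, src});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;                         // stale entry: skip
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {                   // relax edge (u, v)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;                                           // INF means unreachable
}
```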

# Floyd-Warshall Algorithm

Concept: Finds shortest paths between all pairs of vertices in a weighted graph. Can handle negative edge weights, but no negative cycles. It's a dynamic programming approach.

Pseudocode:

  1. Initialize a dist[V][V] matrix where dist[i][j] is:
    • w(i,j) if an edge exists between i and j .
    • 0 if i == j .
    • infinity if no direct edge exists.
  2. For k from 0 to V-1 (intermediate vertices):
    a. For i from 0 to V-1 (source vertices):
      i. For j from 0 to V-1 (destination vertices):
        1. dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

Time Complexity: O(V^3)
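
A direct transcription of the triple loop, assuming the caller built dist as in step 1 and uses the same INF sentinel; illustrative, not tuned:

```cpp
#include <vector>

// Illustrative Floyd-Warshall: dist is the V x V matrix from step 1,
// with INF as the "no edge" sentinel (the caller must use the same one).
// The guards keep INF + INF from overflowing.
const long long INF = static_cast<long long>(1e18);

void floydWarshall(std::vector<std::vector<long long>>& dist) {
    const int V = static_cast<int>(dist.size());
    for (int k = 0; k < V; ++k)                   // allow k as an intermediate
        for (int i = 0; i < V; ++i)
            for (int j = 0; j < V; ++j)
                if (dist[i][k] < INF && dist[k][j] < INF &&
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}
```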

This summary provides a comprehensive overview of the data structures and algorithms covered, along with their key properties, complexities, and pseudocode where relevant.