AVL Tree

An AVL tree is a height-balanced, self-balancing binary search tree in which each node maintains a piece of extra information called a balance factor, whose value must always be -1, 0, or +1.

Balance Factor

The balance factor of a node in an AVL tree is the difference between the height of its left subtree and the height of its right subtree:

Balance Factor = Height of Left Subtree - Height of Right Subtree

(Some texts use the opposite convention, right minus left; either works as long as it is applied consistently.) The self-balancing property of an AVL tree is maintained through the balance factor: its value must always be -1, 0, or +1. Whenever an insertion or deletion pushes a node's balance factor outside this range, the tree is rebalanced.
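As a minimal sketch, the balance factor can be computed from subtree heights like this (the Node class here is illustrative, not from any particular library):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def height(node):
    """Height of a subtree; an empty subtree has height -1."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    """Height of left subtree minus height of right subtree."""
    return height(node.left) - height(node.right)

# A small balanced tree:   2
#                         / \
#                        1   3
root = Node(2, Node(1), Node(3))
print(balance_factor(root))  # 0
```

A real AVL implementation stores each node's height (or balance factor) in the node and updates it during insertions and deletions, rather than recomputing it recursively as done here for clarity.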

An example of a balanced AVL tree:

[Figure: a balanced AVL tree]

Why?

An AVL tree controls the height of a binary search tree by not letting it become skewed. Every operation on a binary search tree of height h takes O(h) time, which degrades to O(n) when the BST is skewed (the worst case). By keeping the height bounded by O(log n), the AVL tree imposes an O(log n) upper bound on each operation, where n is the number of nodes.

AVL Rotation

In a rotation operation, the positions of the nodes of a subtree are interchanged to restore balance.

There are four rotation cases: LL, RR, LR, and RL. The LL and RR cases are fixed with a single rotation; the LR and RL cases require a double rotation.
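As a hedged sketch, the two basic rotations look like this; the LL and RR cases apply one of them directly, while the LR and RL cases compose both (the Node class is illustrative):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def rotate_right(y):
    """Fixes the LL case: the left child rises and y becomes its right child."""
    x = y.left
    y.left = x.right
    x.right = y
    return x  # new subtree root

def rotate_left(x):
    """Fixes the RR case: the mirror image of rotate_right."""
    y = x.right
    x.right = y.left
    y.left = x
    return y

def rotate_left_right(z):
    """Fixes the LR case: left-rotate the left child, then right-rotate z."""
    z.left = rotate_left(z.left)
    return rotate_right(z)

# LL example: inserting 1, then 2, then 3 in reverse order skews the tree
# 3 -> 2 -> 1; one right rotation rebalances it with 2 at the root.
skewed = Node(3, Node(2, Node(1)))
balanced = rotate_right(skewed)
print(balanced.key, balanced.left.key, balanced.right.key)  # 2 1 3
```

A full AVL insert would also update stored heights after each rotation; that bookkeeping is omitted here.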

Complexity

Search, insertion, and deletion each take O(log n) time.

AVL Tree Applications

  • For indexing large records in databases
  • For searching in large databases

 

Dijkstra's Algorithm using Fibonacci Heaps (M5)

Dijkstra's Algorithm

Dijkstra's algorithm finds the shortest paths from a source vertex to every other vertex of a weighted graph (and hence the shortest path between any two vertices).

It uses a priority queue to keep track of the vertices that have not yet been visited, ordered by their current tentative distances from the source.

One way to implement this priority queue is as a Fibonacci heap.

Algorithm

Let the distance of vertex Y be the distance from the source vertex to Y. Dijkstra's algorithm will assign some initial distance values and will try to improve them step by step.

  1. Mark all vertices unvisited. Create a set of all the unvisited vertices called the unvisited set.
  2. Assign to every vertex a tentative distance value: set it to zero for our source vertex and to infinity for all other vertices. Set the source vertex as current.
  3. For the current vertex, consider all of its unvisited neighbors and calculate their tentative distances through the current vertex by adding the weight of the edge connecting the current vertex and neighbor to the current vertex's assigned value. Compare the newly calculated tentative distance to the current assigned value and assign the smaller one.
  4. When we are done considering all of the unvisited neighbors of the current vertex, mark the current vertex as visited and remove it from the unvisited set. A visited vertex will never be checked again.
  5. If all reachable vertices have been visited, stop.
  6. Otherwise, select the unvisited vertex with the smallest tentative distance, set it as the new "current vertex", and go back to step 3.
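The steps above can be sketched with Python's heapq as the priority queue (an ordinary binary heap, not a Fibonacci heap); the graph here is an illustrative adjacency-list dict mapping each vertex to (neighbor, weight) pairs:

```python
import heapq

def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}   # tentative distances (step 2)
    dist[source] = 0
    visited = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)              # smallest tentative distance (step 6)
        if u in visited:
            continue
        visited.add(u)                        # mark as visited (step 4)
        for v, w in graph[u]:                 # relax unvisited neighbors (step 3)
            if v not in visited and d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {'a': [('b', 1), ('c', 4)],
         'b': [('c', 2)],
         'c': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```

Because heapq has no decrease-key operation, this version pushes duplicate entries and skips stale ones on pop; a Fibonacci heap would instead decrease the key in place in O(1) amortized time.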

Fibonacci Heaps

Fibonacci heaps are sometimes referred to as “lazy” data structures because their structure is less strictly managed as they perform operations than in many other data structures. But this “laziness” allows them to have extremely fast amortized time complexities for common heap operations.

Operation       Time Complexity
insert()        O(1)
reportMin()     O(1)
deleteMin()     O(log n) amortized
delete()        O(log n) amortized
decreaseKey()   O(1) amortized
merge()         O(1)

Fibonacci heaps are implemented using a circular and doubly-linked root list of heap-ordered trees in which each node contains a pointer to its parent and to one of its children. The children of each node are doubly-linked in a circular linked list. There is always a pointer to the node with the minimum key in the heap. It is this high connectivity between nodes that allows O(1) time insertion, minimum-reporting, key-decreasing, and merging. 
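To make the structure described above concrete, here is a hedged sketch of just the root list: nodes in a circular doubly-linked list with a running min pointer, which is what makes insert, find-min, and merge O(1). The consolidation work that deleteMin() performs (linking trees of equal degree) is deliberately omitted, as that is where the O(log n) amortized cost lives. All names are illustrative:

```python
class FibNode:
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.child = None               # pointer to one of the children
        self.left = self.right = self   # circular doubly-linked siblings

class FibHeap:
    def __init__(self):
        self.min = None                 # pointer to the minimum-key root

    def insert(self, key):
        """O(1): splice a one-node list into the root list."""
        node = FibNode(key)
        self.min = self._splice(self.min, node)
        return node

    def find_min(self):
        """O(1): just follow the min pointer."""
        return self.min.key

    def merge(self, other):
        """O(1): concatenate the two circular root lists."""
        self.min = self._splice(self.min, other.min)

    @staticmethod
    def _splice(a, b):
        """Join two circular doubly-linked lists; return the smaller head."""
        if a is None:
            return b
        if b is None:
            return a
        a_right, b_right = a.right, b.right
        a.right, b_right.left = b_right, a
        b.right, a_right.left = a_right, b
        return a if a.key <= b.key else b

h = FibHeap()
for k in (7, 3, 9):
    h.insert(k)
print(h.find_min())  # 3
```

Note how neither insert nor merge walks the structure at all; everything is a constant number of pointer updates, which is exactly the "laziness" the text describes.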

Putting Them Together

Dijkstra's original shortest path algorithm does not use a priority queue, and runs in O(V²) time. When using a Fibonacci heap as a priority queue, it runs in O(E + V log V) time, which is asymptotically the fastest known time complexity for this problem. However, due to their programming complexity, and for some practical purposes, Fibonacci heaps are not always a necessary or significant improvement. Fibonacci heaps are best reserved for applications in which the number of deletions is small compared to the use of other operations. In further iterations of this demo, the efficacy of different priority queue implementations could be tested with timing tests for graphs of different sizes and properties. For now, animation overhead erases any improvements made by the Fibonacci heap, and this demo solely shows a visualization of the two structures in action.

Monk and Inversions

using System; public class Solution { public static void Main() { int T = Convert.ToInt32(Console.ReadLine...