Binomial heaps, Fibonacci heaps, and applications

Binomial trees
B0 is a single node. For i >= 1, Bi is built by linking two copies of B(i-1): the root of one becomes the leftmost child of the root of the other.
Equivalently, the root of Bi has children that are the roots of B(i-1), B(i-2), ..., B1, B0.

Properties of binomial trees
1) |Bk| = 2^k
2) degree(root(Bk)) = k
3) depth(Bk) = k
=> The degree and depth of a binomial tree with at most n nodes are at most log(n).
Define the rank of Bk to be k.

Binomial heaps (def)
A collection of binomial trees, at most one of every rank. Items at the nodes, heap ordered.
Possible representation: doubly link the roots and the children of every node. Parent pointers are needed for delete.
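A minimal sketch of this representation in Python; the field names (key, parent, child, left, right, rank, marked) are illustrative choices, not taken from the slides:

```python
class Node:
    """One node of a binomial (or Fibonacci) heap tree.

    Roots, and the children of every node, sit on circular doubly linked
    lists; parent pointers are kept so that delete can walk upward.
    """
    def __init__(self, key):
        self.key = key
        self.parent = None
        self.child = None       # some child; its siblings are reached via left/right
        self.left = self        # circular doubly linked sibling list
        self.right = self
        self.rank = 0           # number of children
        self.marked = False     # used later by Fibonacci heaps (cascading cuts)
```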
Binomial heaps (operations)
Operations are defined via a basic operation on binomial trees called linking: produce a Bk from two B(k-1), keeping heap order.

Binomial heaps (ops cont.)
The basic operation is meld(h1,h2): like addition of binary numbers.
Example: h1 = {B0, B1, B3, B4}, h2 = {B0, B3}. Linking the two B0's gives a B1, which links with h1's B1 to give a B2; linking the two B3's gives a B4, which links with h1's B4 to give a B5. Result: {B2, B5}.

Binomial heaps (ops cont.)
Find-min(h): obvious.
Insert(x,h): meld a new heap consisting of a single B0 containing x with h.
Delete-min(h): chop off the minimal root, meld its subtrees with h, and update the minimum pointer if needed.
Delete(x,h): bubble x up and continue like delete-min.
Decrease-key(x,h,Δ): bubble x up, update the minimum pointer if needed.
All operations take O(log n) time in the worst case, except find-min(h), which takes O(1) time.
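A sketch of linking in Python, reusing the Node fields from the earlier sketch; this is illustrative, not the slides' own code:

```python
def link(t1, t2):
    """Link two heap-ordered trees of the same rank k-1 into one tree of rank k.

    The root with the larger key becomes the leftmost child of the other root,
    so heap order is preserved and the surviving root's rank grows by one.
    """
    if t2.key < t1.key:
        t1, t2 = t2, t1              # make t1 the root with the smaller key
    t2.parent = t1
    if t1.child is None:             # t2 becomes the only child
        t2.left = t2.right = t2
    else:                            # splice t2 into t1's circular child list
        t2.right = t1.child
        t2.left = t1.child.left
        t1.child.left.right = t2
        t1.child.left = t2
    t1.child = t2
    t1.rank += 1
    return t1
```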
Amortized analysis
We are interested in the worst-case running time of a sequence of operations.
Example: a binary counter with a single operation, increment:
00000 -> 00001 -> 00010 -> 00011 -> 00100 -> 00101

Amortized analysis (cont.)
In the worst case a single increment takes O(k) time, where k = #digits.
What is the worst-case complexity of a sequence of increments?
Define a potential Φ of the counter and set
Amortized(increment) = actual(increment) + ΔΦ.
What should Φ(c) be?

Amortized analysis (cont.)
Amortized(increment_1) = actual(increment_1) + Φ_1 - Φ_0
Amortized(increment_2) = actual(increment_2) + Φ_2 - Φ_1
...
Amortized(increment_n) = actual(increment_n) + Φ_n - Φ_{n-1}
Summing, Σ_i Amortized(increment_i) = Σ_i actual(increment_i) + Φ_n - Φ_0,
so Σ_i Amortized(increment_i) ≥ Σ_i actual(increment_i) whenever Φ_n - Φ_0 ≥ 0.

Amortized analysis (cont.)
Define the potential of the counter to be Φ(c) = #(ones).
Amortized(increment) = actual(increment) + ΔΦ = (1 + #(1→0)) + (1 - #(1→0)) = O(1).
=> A sequence of n increments takes O(n) time.

Binomial heaps - amortized analysis
Φ(collection of heaps) = #(trees).
Amortized cost of insert: O(1). Amortized cost of the other operations: still O(log n).
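Returning to the binary counter example, here is a small sketch that instruments increments and checks the potential argument numerically; it is purely illustrative:

```python
def increment(bits):
    """Increment a little-endian list of bits and return the actual cost
    (number of bit flips): all the trailing 1->0 flips plus one 0->1 flip."""
    cost, i = 0, 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                    # a 1 turns into 0
        cost, i = cost + 1, i + 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1                    # the single 0 -> 1 flip
    return cost + 1

bits, total, phi_prev = [0] * 5, 0, 0  # Φ = number of ones
for _ in range(100):
    actual = increment(bits)
    phi = sum(bits)
    amortized = actual + phi - phi_prev
    assert amortized == 2              # amortized cost is O(1) for every increment
    phi_prev = phi
    total += actual
print(total)                           # total actual work over n increments is O(n)
```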
Binomial heaps + lazy meld
Allow more than one tree of each rank.
Meld(h1,h2): concatenate the lists of binomial trees and update the minimum pointer to be the smaller of the two minimums.
O(1) worst case and amortized.

Binomial heaps + lazy meld
As long as we do not do a delete-min, our heaps are just doubly linked lists of single nodes (e.g. roots with keys 4, 11, 6, 9, 5, 9).
Delete-min: chop off the minimum root and add its children to the list of trees. Then do successive linking: traverse the forest, keep linking trees of the same rank, and maintain a pointer to the minimum root.

Binomial heaps + lazy meld
A possible implementation of delete-min uses an array indexed by rank to keep at most one binomial tree of each rank among the trees we have already traversed. Once we encounter a second tree of some rank we link the two, and keep linking until no two trees of the same rank remain; we then record the resulting tree in the array.
Amortized(delete-min) = (#(trees at the end) + #links + max-rank) - #links ≤ (2log(n) + #links) - #links = O(log(n)).
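A sketch of successive linking using a table indexed by rank (a dict here), together with the link function from the earlier sketch; the helper name is hypothetical:

```python
def successive_linking(roots):
    """Link trees of equal rank until at most one tree of each rank remains.

    `roots` is an iterable of tree roots. Returns the remaining roots and a
    pointer to the minimum root.
    """
    by_rank = {}                       # rank -> the one tree of that rank seen so far
    for t in roots:
        while t.rank in by_rank:       # a second tree of this rank: link and retry
            t = link(by_rank.pop(t.rank), t)
        by_rank[t.rank] = t
    remaining = list(by_rank.values())
    min_root = min(remaining, key=lambda r: r.key) if remaining else None
    return remaining, min_root
```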
Binomial heaps + lazy delete
Allow more than one tree of each rank.
Meld(h1,h2) and insert(x,h): as before.
Delete(x,h): simply mark x as deleted.
Delete-min(h): y = find-min(h); delete(y,h).
How do we do find-min?

Binomial heaps + lazy delete
Traverse the trees top-down, purging deleted nodes and stopping at each non-deleted node.
Then do successive linking on the forest you obtain.

Binomial heaps + lazy delete (analysis)
What is the amortized cost of find-min?
Modify the potential a little: Φ(collection of heaps) = #(trees) + #(deleted nodes).
Insert, meld, delete: O(1). Delete-min: like find-min.

Binomial heaps + lazy delete (analysis)
What is the amortized cost of find-min?
Actual(purge) = #(nodes purged) + #(new trees)
ΔΦ(purge) = #(new trees) - #(nodes purged)
Amortized(purge) = actual(purge) + ΔΦ(purge) = O(#(new trees))
Amortized(find-min) = amortized(purging) + amortized(successive linking + scan of undeleted nodes).
We saw that amortized(successive linking) = O(log(n)).
So amortized(find-min) = O(log(n) + #(new trees)).
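A sketch of the purging traversal, on top of the Node sketch from above; the `deleted` flag is a hypothetical extra field on each node:

```python
def children(x):
    """Iterate over x's child list (circular, via the Node sketch above)."""
    c = x.child
    if c is None:
        return
    first = c
    while True:
        nxt = c.right            # save: the caller may relink c
        yield c
        c = nxt
        if c is first:
            break

def purge(roots):
    """Traverse trees top-down, discarding nodes marked deleted.

    Stops at each non-deleted node: that node roots a new tree and its
    subtree is not explored further. Returns the new forest.
    """
    new_roots, stack = [], list(roots)
    while stack:
        x = stack.pop()
        if getattr(x, "deleted", False):
            stack.extend(children(x))   # keep purging below a deleted node
        else:
            x.parent = None
            new_roots.append(x)         # undeleted node: a whole new tree
    return new_roots
```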
Binomial heaps + lazy delete (analysis)
How many new trees are created by the purging step?
Let p = #(nodes purged) and n = total #(nodes). Then #(new trees) = O(p·(log(n/p) + 1)),
so amortized(find-min) = O(p·(log(n/p) + 1)).
Proof: Suppose the i-th purged node, 1 ≤ i ≤ p, had k_i undeleted children. One of them has degree at least k_i - 1, so its subtree contains at least 2^(k_i - 1) nodes.

Binomial heaps + lazy delete (analysis)
Proof (cont.): How large can k_1 + k_2 + ... + k_p be subject to Σ_{i=1}^{p} 2^(k_i - 1) ≤ n?
Make all the k_i equal to log(n/p) + 1; then Σ_i k_i = p·(log(n/p) + 1).

Application: The round robin algorithm of Cheriton and Tarjan (76) for MST
We shall use a Union-Find data structure. In the union-find problem we want to maintain a collection of disjoint sets under the operations
1) S = union(S1, S2)
2) S = find(x)
This can be done in O(1) amortized time per union and O(α(k,n)) amortized time per find, where k is the number of finds and n is the number of items (assuming k ≥ n).
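A compact Union-Find sketch with union by rank and path compression, the standard way to obtain bounds of this flavor; it is illustrative, not the course's reference code:

```python
class UnionFind:
    """Disjoint sets over items 0..n-1 with union by rank and path compression."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:           # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return ra
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                    # attach the shallower tree
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return ra
```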
A greedy algorithm for MST
Start with a forest F in which each tree is a singleton. Repeat the following step until there is only one tree in the forest: pick T ∈ F, pick a minimum-cost edge e connecting a vertex of T to a vertex of another tree T', and add e to the forest (merging T and T' into one tree).
Prim's algorithm: always picks the same T.
Kruskal's algorithm: picks the lightest edge out of F.

Cheriton & Tarjan's ideas
Keep the trees in a queue. Pick the first tree T in the queue and pick the lightest edge e connecting it to another tree T'. Remove T' from the queue, connect T and T' with e, and add the resulting tree to the end of the queue.
Cheriton & Tarjan (implementation)
The vertices of each tree T form a set in a Union-Find data structure; denote this set also by T. The edges with one endpoint in T are stored in a heap data structure denoted h(T). We use binomial queues with lazy meld and lazy deletion.
Find e by doing find-min on h(T). Let e = (v,w). Find T' by doing find(w). Then create the new tree by T'' = union(T,T') and h(T'') = meld(h(T), h(T')).

Cheriton & Tarjan (implementation)
Note: the meld implicitly deletes edges. Every edge in h(T'') with both endpoints in T'' is considered "marked deleted". We never explicitly delete edges! We can determine whether an edge is deleted or not by two find operations.
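A high-level sketch of the round-robin loop under these conventions, reusing the UnionFind sketch from above. Python lists with heapq stand in for the lazy binomial queues (so the melds here are not O(1)); edge deletion is implicit exactly as in the slides: an edge is skipped once both endpoints lie in the same set.

```python
import heapq
from collections import deque

def round_robin_mst(n, edges):
    """Round-robin MST sketch (queue of trees, Cheriton-Tarjan style).

    edges: list of (weight, u, v) with vertices 0..n-1, graph assumed connected.
    """
    uf = UnionFind(n)                               # UnionFind sketch from above
    heap = {v: [e for e in edges if v in (e[1], e[2])] for v in range(n)}
    for h in heap.values():
        heapq.heapify(h)                            # h(T): edges touching tree T
    ticket = {v: v for v in range(n)}               # live queue ticket per tree
    queue = deque((v, v) for v in range(n))
    mst, next_ticket = [], n
    while len(mst) < n - 1:
        root, tk = queue.popleft()
        if ticket.get(root) != tk:                  # stale entry of a merged-away tree
            continue
        h = heap[root]
        while uf.find(h[0][1]) == uf.find(h[0][2]):
            heapq.heappop(h)                        # purge implicitly deleted edges
        w, u, v = heapq.heappop(h)                  # lightest outgoing edge
        other = uf.find(v) if uf.find(u) == root else uf.find(u)
        mst.append((u, v, w))
        new_root = uf.union(root, other)
        merged = heap[root] + heap[other]           # "meld" the two edge heaps
        heapq.heapify(merged)
        heap[new_root] = merged
        del ticket[root], ticket[other]
        ticket[new_root] = next_ticket              # the merged tree goes to the back
        queue.append((new_root, next_ticket))
        next_ticket += 1
    return mst
```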
Cheriton & Tarjan (analysis)
Assume for the moment that find costs O(1) time. Then we can determine whether an edge is marked deleted in O(1) time, and our analysis of the lazy heaps is still valid. So we have:
at most 2m implicit delete operations, costing O(m);
at most n find operations, costing O(n);
at most n meld and union operations, costing O(n);
at most n find-min operations.
The complexity of these find-min operations dominates the complexity of the algorithm.

Cheriton & Tarjan (analysis)
Let m_i be the number of edges in the heap at the i-th iteration, and let p_i be the number of deleted edges purged from the heap by the find-min performed at the i-th iteration. We proved that the i-th find-min costs O(p_i·(log(m_i/p_i) + 1)). We want to bound the sum of these expressions.
We will bound Σ_i m_i first.
Cheriton & Tarjan (analysis)
Divide the iterations into passes as follows. Pass 1 is when we remove the original singleton trees from the queue. Pass i is when we remove the trees that were added to the queue during pass i-1.
What is the size of a tree removed from the queue at pass j? At least 2^j (prove by induction). So how many passes are there? At most log(n).

Cheriton & Tarjan (analysis)
An edge can occur in at most two heaps of trees of one pass. So Σ_i m_i ≤ 2m·log(n).
Recall that we want to bound O(Σ_i p_i·(log(m_i/p_i) + 1)).
1) Consider all find-mins with p_i ≥ m_i / log²(n):
O(Σ_i p_i·(log(m_i/p_i) + 1)) = O(Σ_i p_i·loglog(n)) = O(m·loglog(n)).
2) Consider all find-mins with p_i < m_i / log²(n):
O(Σ_i p_i·(log(m_i/p_i) + 1)) = O(Σ_i (m_i / log²(n))·log(m_i)) = O(m).

Cheriton & Tarjan (analysis)
We obtained a time bound of O(m·loglog(n)) under the assumption that find takes O(1) time. But if you amortize the cost of the finds over O(m·loglog(n)) operations, then the cost per find is really α(m·loglog(n), n) = O(1).

Fibonacci heaps (Fredman & Tarjan 84)
We want to do decrease-key(x,h,Δ) faster than delete + insert, ideally in O(1) time. Why?
Dijkstra's shortest path algorithm
Let G = (V,E) be a weighted undirected graph (weights are non-negative) and let s ∈ V. We want to find the distance d(s,v) (the length of a shortest path) from s to every other vertex.

Dijkstra's shortest path algorithm
Dijkstra: maintain an upper bound d(v) on d(s,v). Every vertex is either scanned, labeled, or unlabeled. Initially d(s) = 0 and d(v) = ∞ for every v ≠ s; s is labeled and all other vertices are unlabeled.
Repeatedly pick a labeled vertex v with d(v) minimum and make v scanned. For every edge (v,w), if d(v) + w(v,w) < d(w) then
1) d(w) := d(v) + w(v,w)
2) label w if it is not labeled already.

Dijkstra's shortest path algorithm (implementation)
Maintain the labeled vertices in a heap, using d(v) as the key of v. We perform n delete-min operations and n insert operations on the heap: O(n·log(n)). For each edge we may perform a decrease-key; with regular heaps this is O(m·log(n)). But if you can do decrease-key in O(1) time, then you can implement Dijkstra's algorithm to run in O(n·log(n) + m) time!
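A runnable Dijkstra sketch. Python's heapq has no decrease-key, so this version pushes duplicate entries and skips stale ones, which gives the O(m·log(n)) regular-heap bound rather than the O(n·log(n) + m) Fibonacci-heap bound:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths for non-negative edge weights.

    adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of distances d(s, v) for every vertex reachable from s.
    """
    dist = {s: 0}
    heap = [(0, s)]                      # (upper bound d(v), v)
    scanned = set()
    while heap:
        d_v, v = heapq.heappop(heap)
        if v in scanned:                 # stale entry: v was already scanned
            continue
        scanned.add(v)
        for w, weight in adj.get(v, []):
            if d_v + weight < dist.get(w, float('inf')):
                dist[w] = d_v + weight   # "decrease-key" by pushing a new entry
                heapq.heappush(heap, (dist[w], w))
    return dist

# example: dijkstra({0: [(1, 3), (2, 1)], 1: [(2, 1)], 2: [(1, 1)]}, 0)
```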
Back to Fibonacci heaps
Suggested implementation of decrease-key(x,h,Δ): if x with its new key is smaller than its parent, cut the subtree rooted at x and add it to the forest. Update the minimum pointer if necessary.

Decrease-key (cont.)
Does it work?
Observation 1: the trees need not be binomial trees any more.
Do we need the trees to be binomial? Where have we used it? In the analysis of delete-min we used the fact that at most log(n) new trees are added to the forest. This was obvious since the trees were binomial and contained at most n nodes.

Decrease-key (cont.)
With plain cuts, trees whose root has high rank but very few descendants become legitimate, so our analysis breaks down.

Fibonacci heaps (cont.)
We shall allow non-binomial trees, but will keep the degrees logarithmic in the number of nodes.
Rank of a tree = degree of its root.
Delete-min: do successive linking of trees of the same rank and update the minimum pointer, as before. Insert and meld also work as before.

Fibonacci heaps (cont.)
Decrease-key(x,h,Δ): cuts the subtree rooted at x if necessary, as described above. In addition we maintain a mark bit for every node. When we cut the subtree rooted at x we check the mark bit of p(x); if it is set then we cut p(x) too. We continue this way until we either reach an unmarked node, in which case we mark it, or reach the root. This mechanism is called cascading cuts.
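A sketch of decrease-key with cascading cuts, on top of the Node sketch from earlier. The minimal FibHeap holder and the two list helpers are assumptions introduced here; the heap is assumed non-empty (h.min is a root), and delete-min/consolidation are not shown:

```python
class FibHeap:
    """Minimal holder: a circular root list reached through `min`."""
    def __init__(self):
        self.min = None

def _splice_into(list_node, x):
    """Insert x into the circular list that list_node belongs to."""
    x.right = list_node
    x.left = list_node.left
    list_node.left.right = x
    list_node.left = x

def _remove_from_list(x):
    x.left.right = x.right
    x.right.left = x.left
    x.left = x.right = x

def cut(h, x, p):
    """Remove x from p's child list and add it to the root list, unmarked."""
    if x.right is x:                 # x was p's only child
        p.child = None
    else:
        if p.child is x:
            p.child = x.right
        _remove_from_list(x)
    p.rank -= 1
    x.parent = None
    x.marked = False
    _splice_into(h.min, x)           # roots form a circular list around h.min

def cascading_cut(h, p):
    """Cut every marked ancestor; mark the first unmarked non-root reached."""
    while p.parent is not None:
        if not p.marked:
            p.marked = True
            return
        gp = p.parent
        cut(h, p, gp)
        p = gp

def decrease_key(h, x, new_key):
    """Decrease x.key; if heap order with the parent breaks, cut and cascade."""
    x.key = new_key
    p = x.parent
    if p is not None and x.key < p.key:
        cut(h, x, p)
        cascading_cut(h, p)
    if x.key < h.min.key:
        h.min = x                    # update the minimum pointer
```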
Fibonacci heaps (delete)
Delete(x,h): cut the subtree rooted at x and then proceed with cascading cuts as for decrease-key. Chop off x from being the root of its subtree and add the subtrees rooted at its children to the forest. If x is the minimum node, do successive linking.

Fibonacci heaps (analysis)
Φ(collection of heaps) = #(trees) + 2·#(marked nodes).
We want everything to be O(1) time except for delete and delete-min, so the cascading cuts should pay for themselves.
Actual(decrease-key) = O(1) + #(cascading cuts)
ΔΦ(decrease-key) = O(1) - #(cascading cuts)
=> amortized(decrease-key) = O(1)!

Fibonacci heaps (analysis)
What about delete and delete-min? Cascading cuts and successive linking will pay for themselves. The only question is: what is the maximum degree of a node? That is, how many trees are added to the forest when we chop off a root?

Fibonacci heaps (analysis)
Lemma 1: Let x be any node in an F-heap. Arrange the children of x in the order they were linked to x, from earliest to latest. Then the i-th child of x has rank at least i-2.

Fibonacci heaps (analysis)
Corollary 1: A node x of rank k in an F-heap has at least φ^k descendants, where φ = (1 + √5)/2 is the golden ratio.
Proof: Let s_k be the minimum number of descendants of a node of rank k in an F-heap. By Lemma 1,
s_k ≥ Σ_{i=0}^{k-2} s_i + 2, with s_0 = 1 and s_1 = 2.

Fibonacci heaps (analysis)
Proof (cont.): The Fibonacci numbers satisfy F_{k+2} = Σ_{i=2}^{k} F_i + 2 for k ≥ 2, and F_2 = 1, so by induction s_k ≥ F_{k+2}. It is well known that F_{k+2} ≥ φ^k.
It follows that the maximum degree k in an F-heap with n nodes satisfies φ^k ≤ n, so k ≤ log(n) / log(φ) ≈ 1.4404·log(n).
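As a compact restatement of the argument above (a reconstruction in LaTeX, not the slides' own notation):

```latex
\[
s_k \;\ge\; 2 + \sum_{i=0}^{k-2} s_i, \qquad s_0 = 1,\; s_1 = 2,
\qquad
F_{k+2} \;=\; 2 + \sum_{i=2}^{k} F_i \quad (k \ge 2),
\]
\[
\text{so by induction } s_k \ge F_{k+2} \ge \varphi^{k},
\quad \varphi = \tfrac{1+\sqrt{5}}{2},
\qquad
\varphi^{k} \le n \;\Rightarrow\; k \le \log_{\varphi} n \approx 1.4404 \,\log_2 n .
\]
```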
Application #2: Prim's algorithm for MST
Start with T being a single vertex. Grow the tree by repeating the following step: add the minimum-cost edge connecting a vertex in T to a vertex outside of T.

Application #2: Prim's algorithm for MST
Maintain the vertices outside of T that are adjacent to T in a heap. The key of a vertex v is the weight of the lightest edge (v,w) such that w is in the tree.
Iteration: do a delete-min. Let v be the minimum vertex and (v,w) the lightest edge as above. Add (v,w) to T. For each edge (v,u) with u ∉ T:
if key(u) = ∞, insert u into the heap with key(u) = w(v,u);
if w(v,u) < key(u), decrease the key of u to w(v,u).
With regular heaps: O(m·log(n)). With F-heaps: O(n·log(n) + m).
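A compact Prim sketch using heapq as a stand-in; as in the Dijkstra sketch, duplicates are pushed instead of doing a true decrease-key, so this runs in the regular-heap bound:

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's algorithm sketch. adj: dict vertex -> list of (neighbor, weight)."""
    in_tree = {start}
    heap = [(w, u, start) for u, w in adj.get(start, [])]
    heapq.heapify(heap)
    mst = []
    while heap:
        w, v, parent = heapq.heappop(heap)
        if v in in_tree:                           # stale entry
            continue
        in_tree.add(v)
        mst.append((parent, v, w))
        for u, wu in adj.get(v, []):
            if u not in in_tree:
                heapq.heappush(heap, (wu, u, v))   # lazy "decrease-key"
    return mst

# example: prim_mst({0: [(1, 2), (2, 3)], 1: [(0, 2), (2, 1)], 2: [(0, 3), (1, 1)]})
```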
Thin heaps (Kaplan & Tarjan 97)
A variation of Fibonacci heaps in which the trees are "almost" binomial; in particular they have logarithmic depth. You also save a pointer and a bit per node, so thin heaps should be more efficient in practice.
A thin binomial tree is a binomial tree in which each non-root, non-leaf node may have lost its leftmost child.

Thin binomial trees
A thin binomial tree is a binomial tree in which each non-root, non-leaf node may have lost its leftmost child.
(figure: a Bi whose root has children B(i-1), B(i-2), ..., B1, B0, some of which have lost their leftmost child)

Thin binomial trees (cont.)
So for every node x, either rank(x) = degree(x) or rank(x) = degree(x) + 1. In the latter case we say that the node is marked.

Thin heaps
Thin heaps maintain every tree as a thin binomial tree by changing the way we do cascading cuts.

Cascading cuts
Cutting a child may create an illegal "hole" in a node's child list.

Cascading cuts
Or it may create a "rank violation".

How do we fix a "hole"?
There are two cases, depending on whether the left sibling of the hole is marked or not.

How do we fix a "hole"?
If it is marked, then unmark it.