1. The O-notation provides an asymptotic upper bound. The Ω-notation provides an asymptotic lower bound. The Θ-notation asymptotically bounds a function from above and below.
2. To represent a heap as an array, the root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child are Parent(i) { return ⌊i/2⌋; }, Left(i) { return 2*i; }, and Right(i) { return 2*i + 1; }.
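As a sanity check, the 1-based index arithmetic above can be sketched in Python (function names ours):

```python
# 1-based heap index arithmetic: A[1] is the root.
def parent(i):
    return i // 2      # floor(i/2)

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# Example: node 5's parent is node 2; its children are nodes 10 and 11.
print(parent(5), left(5), right(5))  # → 2 10 11
```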
3. Because a heap of n elements is a binary tree, the height of any node is O(lg n).
4. In optimization problems, there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem.
5. A problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.
6. A sequence Z = ⟨z1, z2, ..., zk⟩ is a subsequence of X = ⟨x1, x2, ..., xm⟩ if there exists a strictly increasing sequence ⟨i1, i2, ..., ik⟩ of indices of X such that for all j = 1, 2, ..., k, we have xij = zj.
Let X = ⟨x1, ..., xm⟩ and Y = ⟨y1, ..., yn⟩ be sequences, and let Z = ⟨z1, ..., zk⟩ be any LCS of X and Y.
(1) If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
(2) If xm ≠ yn, then zk ≠ xm implies that Z is an LCS of Xm-1 and Y.
(3) If xm ≠ yn, then zk ≠ yn implies that Z is an LCS of X and Yn-1.
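The case analysis above yields the standard LCS dynamic program directly; a minimal Python sketch (function name ours):

```python
def lcs_length(X, Y):
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j]
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # case (1): last symbols match, extend the LCS of the shorter prefixes
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                # cases (2)/(3): drop one tail symbol and take the better option
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # → 4 (e.g., "BCBA")
```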
7. A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
8. The greedy-choice property and optimal substructure are the two key ingredients of a greedy algorithm.
9. When a recursive algorithm revisits the same problem over and over again, we say that the optimization problem has overlapping subproblems.
10. The greedy-choice property is that a globally optimal solution can be arrived at by making a locally optimal (greedy) choice.
11. A matrix-multiplication approach yields a Θ(V⁴)-time algorithm for the all-pairs shortest-paths problem, whose running time can then be improved to Θ(V³ lg V).
12. The Floyd-Warshall algorithm runs in Θ(V³) time to solve the all-pairs shortest-paths problem.
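A minimal Python sketch of the Θ(V³) triple loop (the adjacency-matrix input with float('inf') for missing edges is our choice of representation):

```python
def floyd_warshall(W):
    # W: n x n matrix of edge weights; W[i][i] = 0, inf where there is no edge.
    n = len(W)
    d = [row[:] for row in W]          # work on a copy
    for k in range(n):                 # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = float('inf')
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(floyd_warshall(W)[0][2])  # → 4 (path 0 -> 1 -> 2)
```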
13. The running time of quicksort is O(n²) in the worst case and O(n lg n) in the average case.
14. The MERGE(A, p, q, r) procedure in merge sort takes Θ(n) time.
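The Θ(n) merge of statement 14 can be sketched in Python (0-based inclusive indices; a two-pointer sketch, names ours):

```python
def merge(A, p, q, r):
    # Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive) in Θ(n) time.
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take from the left half while it still has the smaller front element.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

A = [2, 4, 5, 1, 2, 3]
merge(A, 0, 2, 5)
print(A)  # → [1, 2, 2, 3, 4, 5]
```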
15. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm makes |V| - 1 passes over the edges of the graph.
16. The Bellman-Ford algorithm runs in O(VE) time.
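The |V| - 1 passes of statements 15–16 can be sketched in Python, including the extra pass that detects a reachable negative-weight cycle (the edge-list representation is our choice):

```python
def bellman_ford(n, edges, s):
    # edges: list of (u, v, w) triples. Returns (distances, True), or
    # (None, False) if a negative-weight cycle is reachable from s.
    INF = float('inf')
    d = [INF] * n
    d[s] = 0
    for _ in range(n - 1):            # |V| - 1 passes over all edges
        for u, v, w in edges:
            if d[u] + w < d[v]:       # relax edge (u, v)
                d[v] = d[u] + w
    for u, v, w in edges:             # one more pass: any improvement => cycle
        if d[u] + w < d[v]:
            return None, False
    return d, True

d, ok = bellman_ford(3, [(0, 1, 4), (1, 2, -2), (0, 2, 5)], 0)
print(d, ok)  # → [0, 4, 2] True
```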
17. A decision tree represents the comparisons made by a comparison sort. The asymptotic height of any decision tree for sorting n elements is Ω(n lg n).

True-false questions
1. An algorithm is said to be correct if, for some input instance, it halts with the correct output. F
2. Insertion sort always beats merge sort. F
3. Θ(n lg n) grows more slowly than Θ(n²). Therefore, merge sort asymptotically beats insertion sort in the worst case. T
4. Computers today are fast and computer memory is very cheap, so we have no reason to study algorithms. F
5. In the RAM (Random-Access Machine) model, instructions are executed with concurrent operations. F
6. The running time of an algorithm on a particular input is the number of primitive operations or
“steps” executed. T
7. Quicksort has no combining step: the two subarrays form an already-sorted array. T
8. The running time of counting sort is O(n + k), but the running time of comparison sorting is Ω(n lg n), so this is a contradiction. F
9. Counting sort is stable. T
10. For the selection problem, there is an algorithm of theoretical interest only with O(n) worst-case running time. T
11. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the
subproblems recursively, and then combine their solutions to solve the original problem. In
contrast, dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. T
12. In dynamic programming, we build an optimal solution to the problem from optimal solutions to subproblems. T
13. The best-case running time is the longest running time for any input of size n. F
14. When we analyze the running time of an algorithm, we are actually interested in the rate of growth (order of growth). T
15. The dynamic-programming approach means that we break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem. T
16. Insertion sort and merge sort both use the divide-and-conquer approach. F
17. Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0 }. T
18. Min-heaps satisfy the heap property: A[Parent(i)] ≥ A[i] for all nodes i > 1. F
19. For an array of length n, all elements in the range A[⌊n/2⌋ + 1 .. n] are heaps. T
20. The tight bound on the running time to build a max-heap from an unordered array is not linear. F
21. The call to BuildHeap() takes O(n) time, and each of the n - 1 calls to Heapify() takes O(lg n) time. Thus the total time taken by HeapSort() = O(n) + (n - 1)·O(lg n) = O(n) + O(n lg n) = O(n lg n). T
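The cost breakdown in statement 21 can be sketched in Python (0-based indices, names ours):

```python
def heapify(A, i, n):
    # Sift A[i] down inside the max-heap A[0..n-1]; O(lg n) per call.
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and A[l] > A[largest]:
            largest = l
        if r < n and A[r] > A[largest]:
            largest = r
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest

def heapsort(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # BuildHeap: O(n) total
        heapify(A, i, n)
    for end in range(n - 1, 0, -1):       # n - 1 extractions, O(lg n) each
        A[0], A[end] = A[end], A[0]       # move the max to its final slot
        heapify(A, 0, end)

A = [4, 1, 3, 2, 16, 9, 10]
heapsort(A)
print(A)  # → [1, 2, 3, 4, 9, 10, 16]
```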
22. Quicksort is a dynamic-programming algorithm. The array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r] such that all elements in A[p..q] are less than all elements in A[q+1..r], and the subarrays are recursively sorted by calls to quicksort. F
23. Assume that we have a connected, undirected graph G = (V, E) with a weight function w : E → R, and we wish to find a minimum spanning tree for G. Both Kruskal's and Prim's algorithms use a dynamic-programming approach to the problem. F
24. A cut (S, V - S) of an undirected graph G = (V, E) is a partition of E. F
25. An edge is a light edge crossing a cut if its weight is the maximum of any edge crossing the cut. F
26. Kruskal's algorithm uses a disjoint-set data structure to maintain several disjoint sets of elements. T
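A minimal Python sketch of the disjoint-set structure statement 26 refers to (path compression and union by size are our implementation choices):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point nodes at their grandparent while walking up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # same set already: in Kruskal, the edge would form a cycle
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra      # attach the smaller tree under the larger
        self.size[ra] += self.size[rb]
        return True

ds = DisjointSet(4)
ds.union(0, 1); ds.union(2, 3)
print(ds.find(0) == ds.find(1), ds.find(0) == ds.find(2))  # → True False
```

In Kruskal's algorithm, edges are scanned in nondecreasing weight order and an edge is kept exactly when union() returns True.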
27. The optimal-substructure property is a hallmark of the applicability of both dynamic programming and greedy algorithms. T
28. Dijkstra's algorithm is a dynamic programming algorithm. F
29. The Floyd-Warshall algorithm, which finds shortest paths between all pairs of vertices, is a greedy algorithm. F
30. Given a weighted, directed graph G = (V, E) with weight function w : E → R, let p = ⟨v1, v2, ..., vk⟩ be a shortest path from vertex v1 to vertex vk and, for any i and j such that 1 ≤ i ≤ j ≤ k, let pij = ⟨vi, vi+1, ..., vj⟩ be the subpath of p from vi to vj. Then pij is a shortest path from vi to vj. T
31. Given a weighted, directed graph G = (V, E) with weight function w : E → R, if there is a negative-weight cycle on some path from s to v, there exists a shortest path from s to v. F
32. Since any acyclic path in a graph G = (V, E) contains at most |V| distinct vertices, it also contains at most |V| - 1 edges. Thus, we can restrict our attention to shortest paths of at most |V| - 1 edges. T
33. The process of relaxing an edge (u, v) tests whether we can improve the shortest path to v found so far by going through u. T
34. In Dijkstra's algorithm and the shortest-paths algorithm for directed acyclic graphs, each edge is relaxed exactly once. In the Bellman-Ford algorithm, each edge is also relaxed exactly once. F
35. The Bellman-Ford algorithm solves the single-source shortest-paths problem in the general case in which edge weights must be negative. F
36. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, the Bellman-Ford algorithm cannot return a Boolean value indicating whether or not there is a negative-weight cycle that is reachable from the source. F
37. Given a weighted, directed graph G = (V, E) with source s and weight function w : E → R, for the Bellman-Ford algorithm, if there is such a cycle, the algorithm indicates that no solution exists. If there is no such cycle, the algorithm produces the shortest paths and their weights. F
38. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are negative. F
39. Dijkstra's algorithm solves the single-source shortest-paths problem on a weighted, directed graph G = (V, E) for the case in which all edge weights are nonnegative. The Bellman-Ford algorithm solves the single-source shortest-paths problem on a general weighted, directed graph G = (V, E), and the running time of Dijkstra's algorithm is lower than that of the Bellman-Ford algorithm. T
40. The steps for developing a dynamic-programming algorithm: 1. Characterize the structure of an optimal solution. 2. Recursively define the value of an optimal solution. 3. Compute the value of an optimal solution in a bottom-up fashion. 4. Construct an optimal solution from computed information. T

三 Each of n input elements is an integer in the range 0 to k. Design a linear-running-time algorithm to sort the n elements.

CountingSort(A, B, k)
  for i = 0 to k              // the inputs range over 0..k, so the count array is C[0..k]
    C[i] = 0
  for j = 1 to n
    C[A[j]] += 1              // C[i] now holds the number of elements equal to i
  for i = 1 to k
    C[i] = C[i] + C[i-1]      // C[i] now holds the number of elements ≤ i
  for j = n downto 1          // scan from the back to keep the sort stable
    B[C[A[j]]] = A[j]
    C[A[j]] -= 1

四 Design an expected-linear-running-time algorithm to find the ith smallest element of n elements using the divide-and-conquer strategy. (algorithm description: 3 points)

RAND-SELECT(A, p, r, i)       (5 points)
  if p = r then return A[p]
  q ← RAND-PARTITION(A, p, r)
  k ← q - p + 1
  if i = k then return A[q]
  if i < k then return RAND-SELECT(A, p, q - 1, i)
  else return RAND-SELECT(A, q + 1, r, i - k)

RANDOMIZED-PARTITION(A, p, r) (5 points)
  i ← RANDOM(p, r)
  exchange A[r] ↔ A[i]
  return PARTITION(A, p, r)

PARTITION(A, p, r)
  x ← A[r]
  i ← p - 1
  for j ← p to r - 1
    do if A[j] ≤ x
       then i ← i + 1
            exchange A[i] ↔ A[j]
  exchange A[i+1] ↔ A[r]
  return i + 1

五 Write the INSERT-SORT procedure to sort into non-decreasing order. Analyze its running time with the RAM model. What are the best-case, worst-case, and average-case running times? Write the MERGE-SORT procedure to sort into non-decreasing order. Give the recurrence for the worst-case running time T(n) of merge sort and find the solution to the recurrence.

The best-case running time of insertion sort is T(n) = c1 n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1) = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8). This running time can be expressed as an + b for constants a and b that depend on the statement costs ci; it is thus a linear function of n. The worst-case running time can be expressed as an² + bn + c for constants a, b, and c that again depend on the statement costs ci; it is thus a quadratic function of n. (analysis: 2 points; algorithm description: 2 points)

The worst-case running time of merge sort satisfies
T(n) = Θ(1) if n = 1,
T(n) = 2T(n/2) + Θ(n) if n > 1,
whose solution is T(n) = Θ(n lg n). (recurrence and solution: 3 points)
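For comparison with Bellman-Ford (item 39 above), a minimal Python sketch of Dijkstra's algorithm using a binary heap (the adjacency-list representation is our choice):

```python
import heapq

def dijkstra(adj, s):
    # adj: adj[u] is a list of (v, w) pairs with w >= 0; returns distances from s.
    INF = float('inf')
    d = [INF] * len(adj)
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:                 # stale queue entry, skip
            continue
        for v, w in adj[u]:
            if du + w < d[v]:         # relax edge (u, v)
                d[v] = du + w
                heapq.heappush(pq, (d[v], v))
    return d

adj = [[(1, 4), (2, 1)], [(3, 1)], [(1, 2), (3, 5)], []]
print(dijkstra(adj, 0))  # → [0, 3, 1, 4]
```

With a binary heap this runs in O((V + E) lg V), better than Bellman-Ford's O(VE) when edge weights are nonnegative.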