Solutions to the Chapter 4 Exercises of Introduction to Algorithms (3rd Edition)


2020/10/31: First draft, based on https://walkccc.github.io/CLRS/Chap04/4.1/, with corresponding Python code added.

4 Divide-and-Conquer

4.1 The maximum-subarray problem

1.What does FIND-MAXIMUM-SUBARRAY return when all elements of A are negative?
It will return a single-element subarray containing the largest (least negative) element of A.

2.Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in O(n^2) time.

FIND-MAX-SUBARRAY(A, low, high)
    left = 0
    right = 0
    sum = -∞
    for i = low to high
        current-sum = 0
        for j = i to high
            current-sum += A[j]
            if sum < current-sum
                sum = current-sum
                left = i
                right = j
    return (left, right, sum)

Python Implementation:

def find_max_crossing_subarray(A, low, mid, high):
    # left_sum starts as a negative infinite value
    left_sum = float("-inf")
    sum = 0
    for i in range(mid, low - 1, -1):
        sum += A[i]
        if sum > left_sum:
            left_sum = sum
            max_left = i
    right_sum = float("-inf")
    sum = 0
    for j in range(mid + 1, high + 1):
        sum += A[j]
        if sum > right_sum:
            right_sum = sum
            max_right = j
    return (max_left, max_right, left_sum + right_sum)

def find_max_subarray(A, low, high):
    if high == low:
        return (low, high, A[low])
    else:
        mid = (low + high) // 2
        left_low, left_high, left_sum = find_max_subarray(A, low, mid)
        right_low, right_high, right_sum = find_max_subarray(A, mid + 1, high)
        cross_low, cross_high, cross_sum = find_max_crossing_subarray(A, low, mid, high)
        if left_sum >= right_sum and left_sum >= cross_sum:
            return (left_low, left_high, left_sum)
        elif right_sum >= left_sum and right_sum >= cross_sum:
            return (right_low, right_high, right_sum)
        else:
            return (cross_low, cross_high, cross_sum)

nums = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_max_subarray(nums, 0, len(nums) - 1))
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(find_max_subarray(nums, 0, len(nums) - 1))

3.Implement both the brute-force and recursive algorithms for the maximum-subarray problem on your own computer. What problem size n_0​ gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenever the problem size is less than n_0​. Does that change the crossover point?

I think the value n_0 varies across computers and input arrays. On my machine, n_0 = 47. Python implementation for the comparison:

import time

def find_max_subarray_bf(A, low, high):
    left = 0
    right = 0
    sum = float("-inf")
    for i in range(low, high + 1):
        current_sum = 0
        for j in range(i, high + 1):
            # running sum of A[i..j]
            current_sum += A[j]
            if sum < current_sum:
                sum = current_sum
                left = i
                right = j
    return (left, right, sum)

def find_max_crossing_subarray(A, low, mid, high):
    # left_sum starts as a negative infinite value
    left_sum = float("-inf")
    sum = 0
    for i in range(mid, low - 1, -1):
        sum += A[i]
        if sum > left_sum:
            left_sum = sum
            max_left = i
    right_sum = float("-inf")
    sum = 0
    for j in range(mid + 1, high + 1):
        sum += A[j]
        if sum > right_sum:
            right_sum = sum
            max_right = j
    return (max_left, max_right, left_sum + right_sum)

def find_max_subarray_rc(A, low, high):
    if high == low:
        return (low, high, A[low])
    else:
        mid = (low + high) // 2
        left_low, left_high, left_sum = find_max_subarray_rc(A, low, mid)
        right_low, right_high, right_sum = find_max_subarray_rc(A, mid + 1, high)
        cross_low, cross_high, cross_sum = find_max_crossing_subarray(A, low, mid, high)
        if left_sum >= right_sum and left_sum >= cross_sum:
            return (left_low, left_high, left_sum)
        elif right_sum >= left_sum and right_sum >= cross_sum:
            return (right_low, right_high, right_sum)
        else:
            return (cross_low, cross_high, cross_sum)

def find_max_subarray_mixed(A, low, high):
    # Fall back to brute force below the crossover point n_0 = 47.
    if high - low < 47:
        return find_max_subarray_bf(A, low, high)
    else:
        mid = (low + high) // 2
        left_low, left_high, left_sum = find_max_subarray_mixed(A, low, mid)
        right_low, right_high, right_sum = find_max_subarray_mixed(A, mid + 1, high)
        cross_low, cross_high, cross_sum = find_max_crossing_subarray(A, low, mid, high)
        if left_sum >= right_sum and left_sum >= cross_sum:
            return (left_low, left_high, left_sum)
        elif right_sum >= left_sum and right_sum >= cross_sum:
            return (right_low, right_high, right_sum)
        else:
            return (cross_low, cross_high, cross_sum)

A = [20, -21, 43, -23, -92, 45, -56, -5, 34, -17, 34, 65, 89, -109, 125, 2, -10, 89, 46, 65, -49, 3, -45, 34, 76, 32, -76, -2, 3, -45, 44, 34, 67, -67, 99, -104, 11, -37, 44, 76, -90, 89, -32, 34, 88, 56, -6, -89, -90, -34, -56, 23, 29, 2, 6, 9, 2, -34, -45, 34, 22, -177, 44, 90, -45, -36, 55, 23, 56, -56, -167, -54, 23, 45, 14, 62, -46, -56, -34, 45, 32, 20, -87, 39, 82, 95, -67, -45, 88, -36, 21, 18, 16, 81, -102, 99, -45, -67, -45, -76]

for n in range(1, len(A)):
    start = time.process_time()
    for i in range(0, 10000):
        find_max_subarray_bf(A, 0, n - 1)
    stop = time.process_time()
    duration_bf = stop - start
    start = time.process_time()
    for i in range(0, 10000):
        find_max_subarray_rc(A, 0, n - 1)
    stop = time.process_time()
    duration_rc = stop - start
    print('n={}, duration of bf and rc: {} and {}'.format(n, duration_bf, duration_rc))
    if duration_bf > duration_rc:
        print('recursive beats brute-force when n={}'.format(n))

for n in range(1, len(A)):
    start = time.process_time()
    for i in range(0, 10000):
        bf_result = find_max_subarray_bf(A, 0, n - 1)
    stop = time.process_time()
    duration_bf = stop - start
    start = time.process_time()
    for i in range(0, 10000):
        mix_result = find_max_subarray_mixed(A, 0, n - 1)
    stop = time.process_time()
    duration_mix = stop - start
    print('n={}, duration of bf and mixed: {} and {}'.format(n, duration_bf, duration_mix))
    print('bf result: {}, mixed result: {}'.format(bf_result, mix_result))
    if duration_bf > duration_mix:
        print('mixed beats brute-force when n={}'.format(n))

If the algorithm is modified to use divide-and-conquer when n ≥ 47 and the brute-force approach when n is smaller, the performance at the crossover point almost doubles. The performance at n_0 - 1 stays the same, though (or even gets slightly worse, because of the added overhead).

What I find interesting is that if we set the threshold to n_0 - 1 and use the mixed approach on 40 elements, it is still faster than both pure algorithms.

4.Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subarray is 0. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result?

If the original algorithm would return a negative sum, return an empty subarray instead.
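A minimal sketch of this modification applied to the iterative algorithm, where the sentinel result (-1, -1, 0) denotes the empty subarray (the function name and that index convention are my own choices):

```python
def find_max_subarray_allow_empty(A):
    # Kadane-style scan; the empty subarray with sum 0 is the fallback answer.
    # Returns (low, high, sum); (-1, -1, 0) denotes the empty subarray.
    best_low, best_high, best_sum = -1, -1, 0
    cur_low, cur_sum = 0, 0
    for j in range(len(A)):
        if cur_sum <= 0:
            cur_low, cur_sum = j, A[j]
        else:
            cur_sum += A[j]
        if cur_sum > best_sum:
            best_low, best_high, best_sum = cur_low, j, cur_sum
    return (best_low, best_high, best_sum)

print(find_max_subarray_allow_empty([-3, -1, -2]))  # (-1, -1, 0): empty subarray wins
```

Because best_sum starts at 0 rather than -∞, an all-negative input never improves on the empty subarray.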

5.Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray of A[1..j], extend the answer to find a maximum subarray ending at index j + 1 by using the following observation: a maximum subarray of A[1..j + 1] is either a maximum subarray of A[1..j] or a subarray A[i..j + 1], for some 1 ≤ i ≤ j + 1. Determine a maximum subarray of the form A[i..j + 1] in constant time based on knowing a maximum subarray ending at index j.

ITERATIVE-FIND-MAXIMUM-SUBARRAY(A)
    n = A.length
    max-sum = -∞
    sum = -∞
    for j = 1 to n
        currentHigh = j
        if sum > 0
            sum = sum + A[j]
        else
            currentLow = j
            sum = A[j]
        if sum > max-sum
            max-sum = sum
            low = currentLow
            high = currentHigh
    return (low, high, max-sum)

Python Implementation:

def find_max_subarray_dp(A):
    n = len(A)
    max_sum = float("-inf")
    current_sum = float("-inf")
    low = float("-inf")
    high = float("-inf")
    for j in range(0, n):
        current_high = j
        if current_sum > 0:
            current_sum = current_sum + A[j]
        else:
            current_sum = A[j]
            current_low = j
        if current_sum > max_sum:
            max_sum = current_sum
            low = current_low
            high = current_high
    return (low, high, max_sum)

nums = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_max_subarray_dp(nums))
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(find_max_subarray_dp(nums))

4.2 Strassen's algorithm for matrix multiplication

1.Use Strassen's algorithm to compute the matrix product

\begin{pmatrix} 1 & 3 \\ 7 & 5 \end{pmatrix} \begin{pmatrix} 6 & 8 \\ 4 & 2 \end{pmatrix}

Show your work.

First compute the ten sums and differences S_1, \dots, S_{10} (scalars in the 2 × 2 case):

\begin{array}{ll} S_1 = 6 & S_6 = 8 \\ S_2 = 4 & S_7 = -2 \\ S_3 = 12 & S_8 = 6 \\ S_4 = -2 & S_9 = -6 \\ S_5 = 6 & S_{10} = 14. \end{array}

The result is

\begin{pmatrix} 18 & 14 \\ 62 & 66 \end{pmatrix} .
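These values can be checked mechanically; the helper below (the function name is mine) is a direct transcription of the ten S values and seven P products for the 2 × 2 case:

```python
def strassen_2x2(A, B):
    # Strassen's seven-multiplication scheme, specialized to 2x2 matrices,
    # so the S "matrices" and P products are plain scalars.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    s1, s2, s3, s4, s5 = b12 - b22, a11 + a12, a21 + a22, b21 - b11, a11 + a22
    s6, s7, s8, s9, s10 = b11 + b22, a12 - a22, b21 + b22, a11 - a21, b11 + b12
    p1, p2, p3, p4 = a11 * s1, s2 * b22, s3 * b11, a22 * s4
    p5, p6, p7 = s5 * s6, s7 * s8, s9 * s10
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p5 + p1 - p3 - p7]]

print(strassen_2x2([[1, 3], [7, 5]], [[6, 8], [4, 2]]))  # [[18, 14], [62, 66]]
```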

2.Write pseudocode for Strassen's algorithm.

STRASSEN(A, B)
    n = A.rows
    if n == 1
        return a[1, 1] * b[1, 1]
    let C be a new n × n matrix
    A[1, 1] = A[1..n / 2][1..n / 2]
    A[1, 2] = A[1..n / 2][n / 2 + 1..n]
    A[2, 1] = A[n / 2 + 1..n][1..n / 2]
    A[2, 2] = A[n / 2 + 1..n][n / 2 + 1..n]
    B[1, 1] = B[1..n / 2][1..n / 2]
    B[1, 2] = B[1..n / 2][n / 2 + 1..n]
    B[2, 1] = B[n / 2 + 1..n][1..n / 2]
    B[2, 2] = B[n / 2 + 1..n][n / 2 + 1..n]
    S[1] = B[1, 2] - B[2, 2]
    S[2] = A[1, 1] + A[1, 2]
    S[3] = A[2, 1] + A[2, 2]
    S[4] = B[2, 1] - B[1, 1]
    S[5] = A[1, 1] + A[2, 2]
    S[6] = B[1, 1] + B[2, 2]
    S[7] = A[1, 2] - A[2, 2]
    S[8] = B[2, 1] + B[2, 2]
    S[9] = A[1, 1] - A[2, 1]
    S[10] = B[1, 1] + B[1, 2]
    P[1] = STRASSEN(A[1, 1], S[1])
    P[2] = STRASSEN(S[2], B[2, 2])
    P[3] = STRASSEN(S[3], B[1, 1])
    P[4] = STRASSEN(A[2, 2], S[4])
    P[5] = STRASSEN(S[5], S[6])
    P[6] = STRASSEN(S[7], S[8])
    P[7] = STRASSEN(S[9], S[10])
    C[1..n / 2][1..n / 2] = P[5] + P[4] - P[2] + P[6]
    C[1..n / 2][n / 2 + 1..n] = P[1] + P[2]
    C[n / 2 + 1..n][1..n / 2] = P[3] + P[4]
    C[n / 2 + 1..n][n / 2 + 1..n] = P[5] + P[1] - P[3] - P[7]
    return C
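A straightforward Python transcription of the pseudocode above, as a sketch: it assumes n is an exact power of 2 and uses plain nested lists rather than a matrix library (the helper names are mine).

```python
def mat_add(X, Y):
    # Element-wise sum of two equally sized matrices (lists of lists).
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_sub(X, Y):
    # Element-wise difference of two equally sized matrices.
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    # Assumes n x n matrices with n an exact power of 2.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    split = lambda M: ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                       [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    a11, a12, a21, a22 = split(A)
    b11, b12, b21, b22 = split(B)
    p1 = strassen(a11, mat_sub(b12, b22))
    p2 = strassen(mat_add(a11, a12), b22)
    p3 = strassen(mat_add(a21, a22), b11)
    p4 = strassen(a22, mat_sub(b21, b11))
    p5 = strassen(mat_add(a11, a22), mat_add(b11, b22))
    p6 = strassen(mat_sub(a12, a22), mat_add(b21, b22))
    p7 = strassen(mat_sub(a11, a21), mat_add(b11, b12))
    c11 = mat_add(mat_sub(mat_add(p5, p4), p2), p6)
    c12 = mat_add(p1, p2)
    c21 = mat_add(p3, p4)
    c22 = mat_sub(mat_sub(mat_add(p5, p1), p3), p7)
    # Stitch the four quadrants back into one matrix.
    return [r1 + r2 for r1, r2 in zip(c11, c12)] + \
           [r1 + r2 for r1, r2 in zip(c21, c22)]

print(strassen([[1, 3], [7, 5]], [[6, 8], [4, 2]]))  # [[18, 14], [62, 66]]
```

The 2 × 2 call reproduces the answer to exercise 4.2-1 above.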

3.How would you modify Strassen's algorithm to multiply n×n matrices in which n is not an exact power of 2? Show that the resulting algorithm runs in time Θ(n^{lg 7}).

We can pad each matrix with zeros up to the next power of 2, say m × m with n ≤ m < 2n, and run the power-of-2 algorithm on the padded matrices. Since m < 2n, the running time Θ(m^{lg 7}) is still Θ(n^{lg 7}).
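A sketch of the padding step (the helper name is hypothetical):

```python
def pad_to_power_of_two(M):
    # Embed an n x n matrix in the upper-left corner of an m x m zero matrix,
    # where m is the smallest power of 2 with m >= n (so m < 2n for n >= 1).
    n = len(M)
    m = 1
    while m < n:
        m *= 2
    return [row + [0] * (m - n) for row in M] + [[0] * m for _ in range(m - n)]
```

After multiplying the padded matrices, the true product sits in the top-left n × n block of the result.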

4.What is the largest k such that if you can multiply 3×3 matrices using k multiplications (not assuming commutativity of multiplication), then you can multiply n×n matrices in time o(n^{lg 7})? What would the running time of this algorithm be?

Assume n = 3^m for some m. Then, using block matrix multiplication, we obtain the recurrence T(n) = kT(n / 3) + \Theta(n^2), since combining the blocks costs \Theta(n^2) additions.

By the master theorem, T(n) = \Theta(n^{\log_3 k}) whenever \log_3 k > 2. The largest k satisfying \log_3 k < \lg 7 is k = 21, which gives a running time of \Theta(n^{\log_3 21}) \approx \Theta(n^{2.77}).

5.V. Pan has discovered a way of multiplying 68 \times 68 matrices using 132464 multiplications, a way of multiplying 70×70 matrices using 143640 multiplications, and a way of multiplying 72×72 matrices using 155424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen's algorithm?

Using what we know from the last exercise, we need to pick the smallest of the following

\begin{aligned} \log_{68} 132464 & \approx 2.795128 \\ \log_{70} 143640 & \approx 2.795122 \\ \log_{72} 155424 & \approx 2.795147. \end{aligned}

The fastest one asymptotically is the 70×70 method using 143640 multiplications, giving \Theta(n^{\log_{70} 143640}) \approx \Theta(n^{2.7951}). All three exponents beat Strassen's \lg 7 \approx 2.8074, so any of them is asymptotically faster than Strassen's algorithm.
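The three exponents can be compared in a few lines of Python (the dictionary below is just a convenient encoding of the three methods stated in the problem):

```python
import math

# Pan's methods: block size -> number of multiplications.
methods = {68: 132464, 70: 143640, 72: 155424}

# A method with m multiplications on s x s blocks gives the recurrence
# T(n) = m T(n/s) + O(n^2), hence the exponent log_s m.
exponents = {s: math.log(m, s) for s, m in methods.items()}

best = min(exponents, key=exponents.get)
print(best)  # 70
```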

6.How quickly can you multiply a kn×n matrix by an n×kn matrix, using Strassen's algorithm as a subroutine? Answer the same question with the order of the input matrices reversed.

  • (kn×n)(n×kn) produces a kn×kn matrix, computed as k^2 products of n×n blocks, so the time is \Theta(k^2 n^{\lg 7}).
  • (n×kn)(kn×n) produces an n×n matrix, computed as k products of n×n blocks plus k − 1 additions of n×n matrices, so the time is \Theta(kn^{\lg 7} + (k - 1)n^2) = \Theta(kn^{\lg 7}).

7.Show how to multiply the complex numbers a+bi and c+di using only three multiplications of real numbers. The algorithm should take a, b, c and d as input and produce the real component ac−bd and the imaginary component ad+bc separately.
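A standard answer uses the three products p_1 = ac, p_2 = bd, and p_3 = (a + b)(c + d); then ac − bd = p_1 − p_2 and ad + bc = p_3 − p_1 − p_2. A sketch (the function name is mine):

```python
def complex_multiply(a, b, c, d):
    # (a + bi)(c + di) using three real multiplications instead of four.
    p1 = a * c
    p2 = b * d
    p3 = (a + b) * (c + d)          # = ac + ad + bc + bd
    return (p1 - p2, p3 - p1 - p2)  # (real part ac - bd, imaginary part ad + bc)

print(complex_multiply(1, 2, 3, 4))  # (-5, 10), since (1 + 2i)(3 + 4i) = -5 + 10i
```

This is the same product-saving idea that underlies Karatsuba multiplication: one multiplication of sums replaces two cross products.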

4.3 The substitution method for solving recurrences

1.Show that the solution of T(n) = T(n - 1) + n is O(n^2).

2.Show that the solution of T(n)=T(⌈n/2⌉)+1 is O(lgn).

3.We saw that the solution of T(n)=2T(⌊n/2⌋)+n is O(nlgn). Show that the solution of this recurrence is also Ω(nlgn). Conclude that the solution is Θ(nlgn).

4.Show that by making a different inductive hypothesis, we can overcome the difficulty with the boundary condition T(1)=1 for recurrence (4.19) without adjusting the boundary conditions for the inductive proof.

5.Show that Θ(nlgn) is the solution to the "exact" recurrence (4.3) for merge sort.

The recurrence is

T(n) = T(\lceil n / 2 \rceil) + T(\lfloor n / 2 \rfloor) + \Theta(n)

To show Θ bound, separately show O and Ω bounds.

6.Show that the solution to T(n)=2T(⌊n/2⌋+17)+n is O(nlgn).

7.Using the master method in Section 4.5, you can show that the solution to the recurrence T(n)=4T(n/3)+n is T(n)=\Theta(n\log_{3}{ 4}). Show that a substitution proof with the assumption T(n) \le cn^{\log_3 4} fails. Then show how to subtract off a lower-order term to make the substitution proof work.

8.Using the master method in Section 4.5, you can show that the solution to the recurrence T(n)=4T(n/2)+n^2 is T(n) = \Theta(n^2). Show that a substitution proof with the assumption T(n) \le cn^2 fails. Then show how to subtract off a lower-order term to make the substitution proof work.

9.Solve the recurrence T(n) = 3T(\sqrt n) + \log n by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral.
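The standard change of variables: let m = \lg n, so that n = 2^m and \sqrt n = 2^{m / 2}, and rename S(m) = T(2^m). The recurrence becomes

S(m) = 3S(m / 2) + m.

Since \log_2 3 > 1, case 1 of the master theorem gives S(m) = \Theta(m^{\lg 3}), and hence

T(n) = S(\lg n) = \Theta((\lg n)^{\lg 3}).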

4.4 The recursion-tree method for solving recurrences

1.Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n)=3T(⌊n/2⌋)+n. Use the substitution method to verify your answer.

2.Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n / 2) + n^2. Use the substitution method to verify your answer.

3.Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n)=4T(n/2+2)+n. Use the substitution method to verify your answer.

4.Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n)=2T(n−1)+1. Use the substitution method to verify your answer.

5.Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n)=T(n−1)+T(n/2)+n. Use the substitution method to verify your answer.

6.Argue that the solution to the recurrence T(n) = T(n / 3) + T(2n / 3) + cn, where c is a constant, is Ω(nlgn) by appealing to the recursion tree.

We know that the cost at each level of the tree is cn by examining the tree in Figure 4.6. To find a lower bound on the cost of the algorithm, we need a lower bound on the height of the tree.

The shortest simple path from root to leaf is found by following the leftmost child at each node. Since we divide by 3 at each step, this path has length \log_3 n. Therefore, the cost of the algorithm is at least

cn(\log_3 n + 1) \ge cn\log_3 n = \frac{c}{\lg 3} n\lg n = \Omega(n\lg n).

7.Draw the recursion tree for T(n)=4T(⌊n/2⌋)+cn, where c is a constant, and provide a tight asymptotic bound on its solution. Verify your answer with the substitution method.

8.Use a recursion tree to give an asymptotically tight solution to the recurrence T(n)=T(n−a)+T(a)+cn, where a≥1 and c>0 are constants.

9.Use a recursion tree to give an asymptotically tight solution to the recurrence T(n)=T(αn)+T((1−α)n)+cn, where α is a constant in the range 0<α<1, and c>0 is also a constant.

4.5 The master method for solving recurrences

1.Use the master method to give tight asymptotic bounds for the following recurrences:

a. T(n) = 2T(n / 4) + 1

b. T(n) = 2T(n / 4) + \sqrt n

c. T(n) = 2T(n / 4) + n

d. T(n) = 2T(n / 4) + n^2

 

a. \Theta(n^{\log_4 2}) = \Theta(\sqrt n)

b. \Theta(n^{\log_4 2}\lg n) = \Theta(\sqrt n\lg n)

c. \Theta(n)

d. \Theta(n^2).

2.Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size n / 4 \times n / 4, and the divide and combine steps together will take \Theta(n^2) time. He needs to determine how many subproblems his algorithm has to create in order to beat Strassen's algorithm. If his algorithm creates a subproblems, then the recurrence for the running time T(n) becomes T(n) = aT(n / 4) + \Theta(n^2). What is the largest integer value of a for which Professor Caesar's algorithm would be asymptotically faster than Strassen's algorithm?

Strassen's algorithm has a running time of Θ(n^{lg 7}). The largest integer a such that \log_4 a < \lg 7 is a = 48.
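The threshold is exact: 4^{\lg 7} = (2^{\lg 7})^2 = 7^2 = 49, so \log_4 49 = \lg 7 and the condition \log_4 a < \lg 7 is precisely a < 49, giving a = 48. A quick numerical check:

```python
import math

lg7 = math.log2(7)
# 4^{lg 7} = (2^{lg 7})^2 = 7^2 = 49, so log_4 49 equals lg 7 exactly.
assert math.isclose(math.log(49, 4), lg7)
# Hence the largest integer a with log_4 a < lg 7 is 48.
assert math.log(48, 4) < lg7
print("a = 48")
```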

3.Use the master method to show that the solution to the binary-search recurrence T(n)=T(n/2)+Θ(1) is T(n)=Θ(lgn). (See exercise 2.3-5 for a description of binary search.)

4.Can the master method be applied to the recurrence T(n) = 4T(n / 2) + n^2\lg n? Why or why not? Give an asymptotic upper bound for this recurrence.

5.Consider the regularity condition af(n/b)≤cf(n) for some constant c<1, which is part of case 3 of the master theorem. Give an example of constants a≥1 and b>1 and a function f(n) that satisfies all the conditions in case 3 of the master theorem, except the regularity condition.

a=1, b=2 and f(n)=n(2−cos n). With these choices f(n) = \Theta(n), so f(n) = \Omega(n^{\log_2 1 + \epsilon}) for any 0 < \epsilon < 1, yet the regularity condition fails: when \cos(n / 2) is close to −1 (and hence \cos n = 2\cos^2(n / 2) - 1 is close to 1), we have af(n / b) = (n / 2)(2 - \cos(n / 2)) \approx 3n / 2 while f(n) \approx n, so af(n / b) \le cf(n) would force c \ge 3 / 2 > 1.

4.6 Proof of the master theorem

1.Give a simple and exact expression for n_j in equation (4.27) for the case in which b is a positive integer instead of an arbitrary real number.

2.Show that if f(n) = \Theta(n^{\log_b a}\lg^k{n}), where k≥0, then the master recurrence has solution T(n) = \Theta(n^{\log_b a}\lg^{k + 1}n). For simplicity, confine your analysis to exact powers of b.

3.Show that case 3 of the master method is overstated, in the sense that the regularity condition af(n/b)≤cf(n) for some constant c < 1 implies that there exists a constant ϵ>0 such that f(n)=\Omega(n^{\log_b a + \epsilon}).

Problems

1.Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for n≤2. Make your bounds as tight as possible, and justify your answers.

2.Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:

  1. An array is passed by pointer. Time =Θ(1).
  2. An array is passed by copying. Time =Θ(N), where N is the size of the array.
  3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time =Θ(q−p+1) if the subarray A[p..q] is passed.

a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.

b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.

3.Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for sufficiently small n. Make your bounds as tight as possible, and justify your answers.

4.Fibonacci numbers

5.Chip testing

Explanation:

  1. If the number of chips is odd, the assumption tells us that good chips outnumber bad chips, so after setting one chip aside at random, the good chips among the remaining ones are still at least as numerous as the bad ones.
  2. Chip pairs in which the two chips do not both report each other as good contain at least one bad chip, so throwing both chips away does not change the fact that good chips are at least as numerous as bad chips.
  3. In the remaining pairs, the two chips are either both good or both bad; keeping one chip from each pair reduces the problem to at most half its original size.
  4. If the number of good chips exceeds the number of bad chips by more than one, we simply throw away the chip that was set aside in the odd case. The good chips still outnumber the bad chips by at least one, so the algorithm eventually finds a good chip as claimed.
  5. If the number of good chips is exactly one more than the number of bad chips, there are 2 cases.
    • The chip set aside is good, and the remaining chips are half good and half bad. In this case, all the remaining chips may eventually be thrown away, and the chip set aside is the one we want.
    • The chip set aside is bad. Then good chips outnumber bad chips among the remaining chips, so we can recursively find a good chip there, and the bad chip set aside is thrown away at the end.

c. Using the solution from (b), we can find one good chip with

T(n)≤T(⌈n/2⌉)+⌊n/2⌋.

By the master theorem, we have T(n)=O(n). After finding a good chip, we can classify every other chip against it in n−1 additional tests, so the total number of tests is

O(n)+n−1=Θ(n).

6.Monge arrays

 




第1章 RTL8723BU蓝牙模块驱动移植 1.1. 硬件方案 1.2. 蓝牙驱动移植 1.3. 蓝牙耳机规格要求 第2章 Linux音频框架 2.1. ALSA 2.2. Pulseaudio 2.3. GStreamer 2.4. Jack 2.5. FFADO 2.6. Xine 2.7. Phonon 2.8. 其他分支 第3章 蓝牙协议栈Bluez 3.1…