Solution Manual for Fundamentals of Electromagnetics with Engineering Applications, 1st Edition

Solutions Manual for Numerical Algorithms by Justin Solomon

1.1. We compute the gradients element-by-element:
$$\nabla f = \left(\partial_x(x^2+y^2),\ \partial_y(x^2+y^2)\right) = (2x,\ 2y)$$
$$\nabla g = \left(\partial_x\sqrt{x^2+y^2},\ \partial_y\sqrt{x^2+y^2}\right) = \frac{(x,\ y)}{\sqrt{x^2+y^2}}$$
Notice that $\nabla g$ does not exist at $(0,0)$. Away from that point, the denominator is $\|(x,y)\|_2$, and hence $\nabla g$ has norm one.

1.2. (a) The first two columns are trivially linearly independent. Since the third column is zero, the dimension is 2.
(b) The first three vectors are independent, so the dimension is 3.
(c) We can remove the third vector $(0,1,0)$ from the first two, leaving $(2,0,9)$ and $(3,0,1)$. These are not multiples of each other, so the dimension is 3.
(d) The first two columns are repeated, so the dimension is 3.

1.3. (a) Linear: $0 + 0 = 0\cdot 0 = 0$.
(b) Not linear: $0\cdot f(0,0,0) = 0$, but $f(0\cdot 0,\ 0\cdot 0,\ 0\cdot 0) = 1$.
(c) Not linear: $0\cdot f(0,0,0) = (0,0)$, but $f(0\cdot 0,\ 0\cdot 0,\ 0\cdot 0) = (1,0)$.
(d) Linear: both expressions just scale $x$.
(e) Linear: can be written $f(x,y) = \begin{pmatrix} 2 & 3\\ 1 & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix}$.

1.4. Take $x, y \in U_1 \cap U_2$ and $c \in \mathbb{R}$. In particular, for $i \in \{1,2\}$, this means $x, y \in U_i$ (individually). Since the $U_i$'s are subspaces, we know $x+y,\ cx \in U_i$ for $i \in \{1,2\}$. Since they are in both sets individually, we know $x+y,\ cx \in U_1 \cap U_2$, showing that $U_1 \cap U_2$ is a subspace.
Suppose $U_1 = \mathrm{span}\,\{(1,0)\}$ and $U_2 = \mathrm{span}\,\{(0,1)\}$. Then $(1,0), (0,1) \in U_1 \cup U_2$ but $(1,1) = (1,0) + (0,1) \notin U_1 \cup U_2$, so $U_1 \cup U_2$ cannot be a subspace.

1.5. Suppose $f(x) = \|Ax - a\|_2^2 + \|Bx - b\|_2^2$. Then, at a minimum,
$$0 = \nabla f(x) = \nabla\left[x^\top A^\top Ax - 2x^\top A^\top a + \|a\|_2^2 + x^\top B^\top Bx - 2x^\top B^\top b + \|b\|_2^2\right] = 2A^\top Ax - 2A^\top a + 2B^\top Bx - 2B^\top b.$$
After dividing by 2 and reorganizing the sides, this shows $(A^\top A + B^\top B)x = A^\top a + B^\top b$.

1.6. From elementary calculus, $f, g \in C^1(\mathbb{R}),\ c \in \mathbb{R} \implies f+g,\ cf \in C^1(\mathbb{R})$. These identities show $C^1(\mathbb{R})$ is a subspace of the space of functions. $C^1(\mathbb{R})$ contains the monomials $m_k(x) \equiv x^k$. There are infinitely many of these monomials, and each $m_k$ cannot be written as a linear combination of $m_0, \ldots, m_{k-1}$ with coefficients in $\mathbb{R}$, showing $C^1(\mathbb{R})$ has dimension $\infty$.

1.7. $(A^\top A)_{ij} = c_i\cdot c_j$ and $(AA^\top)_{ij} = r_i\cdot r_j$, where $c_i$ denotes the $i$-th column of $A$ and $r_i$ its $i$-th row.

1.8. Notice that minimizing $f(x)$ is equivalent to minimizing $f(x)^2$ so long as $f(x) \geq 0\ \forall x \in \mathbb{R}^n$. After applying this transformation, taking $B = 0$ and $b = 0$ in problem 1.5 shows $A^\top Ax = A^\top b$.

1.9. As in the previous problem, we can instead minimize $\|Ax\|_2^2$ subject to $\|Bx\|_2^2 = 1$. The Lagrange multiplier function of the squared problem is
$$\Lambda(x;\lambda) \equiv x^\top A^\top Ax - \lambda(x^\top B^\top Bx - 1).$$
Differentiating with respect to $x$ shows $0 = 2A^\top Ax - 2\lambda B^\top Bx$. This provides the optimality condition $A^\top Ax = \lambda B^\top Bx$. At this critical point, we can write
$$\|Ax\|_2^2 = x^\top A^\top Ax = \lambda x^\top B^\top Bx \text{ by the optimality condition} = \lambda\|Bx\|_2^2 = \lambda \text{ by the constraint.}$$
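To sanity-check the normal equations from 1.5 numerically, here is a minimal sketch in Python/NumPy; the matrices and vectors are arbitrary random test data, not from the text:

```python
import numpy as np

# Numerical check of problem 1.5: the minimizer of
# ||Ax - a||^2 + ||Bx - b||^2 satisfies (A^T A + B^T B) x = A^T a + B^T b.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
a, b = rng.standard_normal(5), rng.standard_normal(4)

# Solve the derived linear system.
x = np.linalg.solve(A.T @ A + B.T @ B, A.T @ a + B.T @ b)

# Compare against a stacked least-squares solve of the same objective.
M = np.vstack([A, B])
v = np.concatenate([a, b])
x_ls, *_ = np.linalg.lstsq(M, v, rcond=None)
print(np.allclose(x, x_ls))  # True
```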


1.10. After squaring the constraint, the Lagrange multiplier function of this system is
$$\Lambda(x;\lambda) \equiv a\cdot x - \lambda(\|x\|_2^2 - 1).$$
Differentiating with respect to $x$ shows $0 = a - 2\lambda x$. Plugging this into the constraint,
$$1 = \|x\|_2^2 = \frac{\|a\|_2^2}{4\lambda^2} \implies \lambda = \pm\tfrac12\|a\|_2.$$
Hence, $x = \frac{a}{2\lambda} = \pm\frac{a}{\|a\|_2}$. Since we wish to maximize $f(x)$, we should take $x = a/\|a\|_2$, with $f(x) = \|a\|_2$.

1.11. Differentiating with respect to $x$ shows:
$$0 = \nabla R(x) = \nabla\left(\frac{x^\top Ax}{\|x\|_2^2}\right) = \frac{2\|x\|_2^2\,Ax - 2(x^\top Ax)\,x}{\|x\|_2^4} \text{ by the quotient rule} \implies Ax = \left(\frac{x^\top Ax}{\|x\|_2^2}\right)x.$$
Hence, $x$ is an eigenvector of $A$ with eigenvalue $x^\top Ax/\|x\|_2^2$.

1.12. By definition, $AA^{-1} = I_{n\times n}$. Taking transposes of both sides shows $(A^{-1})^\top A^\top = I_{n\times n}$. Right-multiplying by $(A^\top)^{-1}$ shows $(A^{-1})^\top = (A^\top)^{-1}$, as desired. Similarly, $(AB)(AB)^{-1} = I_{n\times n} \implies B(AB)^{-1} = A^{-1} \implies (AB)^{-1} = B^{-1}A^{-1}$.

1.13. For two functions $A(t), B(t): \mathbb{R} \to \mathbb{R}^{n\times n}$, the identity $\frac{d}{dt}(A(t)\cdot B(t)) = \frac{dA}{dt}B + A\frac{dB}{dt}$ follows directly from the product rule in one variable. By definition of matrix inverses, $A^{-1}(t)\cdot A(t) = I_{n\times n}$ for all $t$. The right side is constant with respect to $t$, so
$$0 = \frac{d}{dt}\left(A^{-1}(t)\cdot A(t)\right) = \frac{d(A^{-1})}{dt}A + A^{-1}\frac{dA}{dt} \implies \frac{d(A^{-1})}{dt} = -A^{-1}\frac{dA}{dt}A^{-1}.$$

1.14. By the chain rule,
$$\frac{d}{dt}f(x + tv) = \nabla f(x + tv)\cdot\frac{d}{dt}(x + tv) = v\cdot\nabla f(x + tv).$$
Substituting $t = 0$ provides the desired result.
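A quick numerical illustration of 1.11: at an eigenvector of a symmetric matrix, the gradient of the Rayleigh quotient vanishes and the quotient equals the eigenvalue. The test matrix is an arbitrary choice:

```python
import numpy as np

# Check of problem 1.11: x an eigenvector of symmetric A implies
# grad R(x) = 0 for R(x) = x^T A x / ||x||^2, and R(x) = eigenvalue.
rng = np.random.default_rng(1)
S = rng.standard_normal((4, 4))
A = S + S.T  # symmetrize

lam, V = np.linalg.eigh(A)
x = V[:, 0]  # an eigenvector of A

def grad_R(x):
    n2 = x @ x
    return (2 * n2 * (A @ x) - 2 * (x @ A @ x) * x) / n2**2

print(np.allclose(grad_R(x), 0))                # True: critical point
print(np.isclose(x @ A @ x / (x @ x), lam[0]))  # True: quotient = eigenvalue
```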


1.15. (a) We expand the product:
$$[B(B^\top B)^{-1}B^\top]^2 = [B(B^\top B)^{-1}B^\top]\cdot[B(B^\top B)^{-1}B^\top] = B[(B^\top B)^{-1}(B^\top B)(B^\top B)^{-1}]B^\top \text{ by associativity} = B(B^\top B)^{-1}B^\top \text{ by canceling out inverses.}$$
(b) Again expanding the product,
$$(I_{n\times n} - A)^2 = I_{n\times n} - 2A + A^2 = I_{n\times n} - 2A + A \text{ since } A \text{ is idempotent} = I_{n\times n} - A.$$
(c) The inverse of $\tfrac12 I_{n\times n} - A$ is $2I_{n\times n} - 4A$, since:
$$\left(\tfrac12 I_{n\times n} - A\right)(2I_{n\times n} - 4A) = I_{n\times n} - 2A - 2A + 4A^2 = I_{n\times n} - 2A - 2A + 4A \text{ since } A \text{ is idempotent} = I_{n\times n}.$$
(d) $Ax = \lambda x \implies A^2x = \lambda Ax = \lambda^2x$. But $A^2 = A$, so $A^2x = Ax = \lambda x$. Thus, $\lambda x = \lambda^2x \implies \lambda = \lambda^2 \implies \lambda \in \{0, 1\}$.

1.16. The product $AB$ has $n^2$ elements. Obviously finding $AB$ requires considering each element of $AB$ at least once (if nothing else, to write the result in memory!), already requiring $O(n^2)$ time even if each element of $AB$ is computed in $O(1)$ time. The algorithms in the figure take $O(n^3)$ time to run due to the nested loops. Hence, there is room for improvement, and indeed Strassen's algorithm and several others achieve faster than $O(n^3)$ asymptotic runtime, at least for large $n$.

1.17. Define $\ell(x) \equiv -\ln p(x)$. Since $\ln$ is monotonic, any local maximum of $p(x)$ is also a local minimum of $\ell(x)$. Hence, $x^*$ is a critical point of $\ell(x)$, implying $\nabla\ell(x^*) = 0$. Let $H$ be the Hessian of $\ell$ at $x^*$. Then, near $x^*$ we can approximate:
$$-\ln p(x) = \ell(x) \approx \ell(x^*) + \tfrac12(x - x^*)^\top H(x - x^*) = -\ln p(x^*) + \tfrac12(x - x^*)^\top H(x - x^*).$$
The first-derivative term of the expansion vanishes since $\nabla\ell(x^*) = 0$. Exponentiating both sides shows
$$p(x) \approx \mathrm{const.}\cdot e^{-\frac12(x - x^*)^\top H(x - x^*)}.$$
Hence, a reasonable Gaussian approximation of $p(x)$ near $x^*$ takes $\Sigma = H^{-1}$ and $\mu = x^*$.

2.1. Depending on the processor, fixed-point arithmetic can be faster than floating-point since it can be carried out on the ALU with integer-type operations, without the need for dealing with an exponent. Fixed-point arithmetic also can be applicable when the scale of numbers under consideration is known ahead of time, e.g., in financial software. Floating-point representations are more accurate, especially when values vary over many scales.
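The identities in 1.15 above are easy to check numerically; this sketch verifies that $P = B(B^\top B)^{-1}B^\top$ is idempotent with eigenvalues in $\{0,1\}$, for an arbitrary full-column-rank test matrix:

```python
import numpy as np

# Check of 1.15(a) and (d): P = B (B^T B)^{-1} B^T satisfies P^2 = P,
# and its eigenvalues all lie in {0, 1}.
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 3))
P = B @ np.linalg.solve(B.T @ B, B.T)

print(np.allclose(P @ P, P))                 # True: idempotent
print(np.round(np.linalg.eigvalsh(P), 8))    # entries are 0 or 1
```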


2.2. (a) (answers may vary) Rounding error can come from multiplication and division to find the value $n$ from the other variables. Discretization error can come from the representations of values from the sensors. Modeling error can result from inaccuracies of the Ideal Gas Law and/or failure to account for secondary factors like sensor noise or pollutants. Input error can result from using an inaccurate value of the constant $R$.
(b) From the ideal gas law, we can write $n = \frac{PV}{RT}$. Then, if we measure $\bar P$ and $\bar T$ rather than the ground-truth values, we can write the forward error as:
$$\begin{aligned}
\frac{\bar PV}{R\bar T} - \frac{PV}{RT} &= \frac{V}{R}\left(\frac{\bar P}{\bar T} - \frac{P}{T}\right)\\
&= \frac{V}{R}\left(\frac{P + \delta_P}{T + \delta_T} - \frac{P}{T}\right) \quad\text{for } |\delta_P|\le\varepsilon_P,\ |\delta_T|\le\varepsilon_T\\
&= \frac{V}{R}\cdot\frac{(P + \delta_P)T - P(T + \delta_T)}{T(T + \delta_T)}\\
&= \frac{V}{R}\cdot\frac{PT + \delta_PT - PT - \delta_TP}{T(T + \delta_T)}\\
&= n\cdot\frac{\delta_PT - \delta_TP}{P(T + \delta_T)}
\end{aligned}$$
Hence, the relative forward error can be bounded as follows:
$$\left|\frac{\delta_PT - \delta_TP}{P(T + \delta_T)}\right| \le \frac{T\varepsilon_P + P\varepsilon_T}{P(T - \varepsilon_T)}$$
(c) In this case,
$$n = \frac{PV}{RT} = \frac{(100\ \mathrm{Pa})(0.5\ \mathrm{m^3})}{(8.31\ \mathrm{J\cdot mol^{-1}\cdot K^{-1}})(300\ \mathrm{K})} = 0.0201\ \mathrm{mol}.$$
With the given measurement bounds, the largest possible value is
$$\frac{(101\ \mathrm{Pa})(0.5\ \mathrm{m^3})}{(8.31\ \mathrm{J\cdot mol^{-1}\cdot K^{-1}})(299.5\ \mathrm{K})} = 0.0203\ \mathrm{mol} = n + 0.000234\ \mathrm{mol} = n + 1.17\%.$$
The smallest possible value is
$$\frac{(99\ \mathrm{Pa})(0.5\ \mathrm{m^3})}{(8.31\ \mathrm{J\cdot mol^{-1}\cdot K^{-1}})(300.5\ \mathrm{K})} = 0.0198\ \mathrm{mol} = n - 0.000234\ \mathrm{mol} = n - 1.17\%.$$
Hence, the absolute error is bounded by $0.000234\ \mathrm{mol}$ and the relative error is bounded by $1.17\%$.
(d) At the range indicated by the problem, it is relatively well-conditioned. When the scale of $\varepsilon_T$ is commensurate with that of $T$, the problem becomes ill-conditioned.

2.3. We can understand the relative condition number as the fraction
$$\kappa_{\mathrm{rel}} = \frac{|\Delta y|/|y|}{|\Delta x|/|x|} = \left|\frac{x\,\Delta y}{y\,\Delta x}\right|,$$
where $y + \Delta y = f(x + \Delta x)$ and $y = f(x)$. By Taylor's theorem, $f(x + \Delta x) = y + f'(x)\Delta x + O(\Delta x^2)$. Hence, $\Delta y = f'(x)\Delta x + O(\Delta x^2)$, so for small $\Delta x$,
$$\kappa_{\mathrm{rel}} \approx \left|\frac{x\cdot f'(x)\Delta x}{f(x)\cdot\Delta x}\right| = \left|\frac{x\,f'(x)}{f(x)}\right|.$$
The absolute condition number of this problem is
$$\left|\frac{\Delta y}{\Delta x}\right| \approx |f'(x)|.$$
The function $f(x) = \ln x$ has a large relative condition number near $x = 1$, since $\kappa_{\mathrm{rel}} = \frac{1}{|\ln x|}$, which blows up near $x = 1$. Contrastingly, the function $f(x) = x$ has relative condition number 1 for all $x$.

2.4. Since minima are roots of $f'$, we can use the conditioning of root-finding, but with an extra derivative:
(a) $|x_{\mathrm{est}} - x^*|$
(b) $|f'(x_{\mathrm{est}}) - f'(x^*)| \approx \delta x\,|f''(x^*)|$
(c) $1/|f''(x^*)|$
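The arithmetic in 2.2(c) above can be reproduced directly; the measurement bounds $\varepsilon_P = 1\ \mathrm{Pa}$ and $\varepsilon_T = 0.5\ \mathrm{K}$ are those implied by the worked numbers:

```python
# Reproducing the arithmetic in 2.2(c): n = PV/(RT) with the implied
# measurement bounds on P and T.
R = 8.31                     # J / (mol K)
P, V, T = 100.0, 0.5, 300.0
n = P * V / (R * T)

n_hi = (P + 1) * V / (R * (T - 0.5))   # largest possible value
n_lo = (P - 1) * V / (R * (T + 0.5))   # smallest possible value
print(f"n = {n:.4f} mol")                                 # 0.0201
print(f"absolute error <= {n_hi - n:.6f} mol")            # ~0.000234
print(f"relative error <= {100 * (n_hi - n) / n:.2f}%")   # ~1.17
```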


2.5. (a) The range is $(-\infty, 0]$ since $\lim_{t\to 0}\log t = -\infty$ and $\log 1 = 0$.
(b) If $x_k$ is very negative, then $e^{x_k}$ is exponentially close to zero. This near-zero value may not be representable, and regardless a single slightly larger value will dominate the sum.
(c) We simplify directly:
$$\ell(x_1,\ldots,x_n) = \ln\left[\sum_k e^{x_k}\right] \text{ by definition} = \ln\left[\sum_k e^{x_k - a + a}\right] = \ln\left[e^a\sum_k e^{x_k - a}\right] = \ln e^a + \ln\left[\sum_k e^{x_k - a}\right] = a + \ln\left[\sum_k e^{x_k - a}\right]$$
Suppose we take $a = \min_k x_k$. Then, rather than adding together tiny values, we have moved the scale to be around $e^0 = 1$. (Other heuristics for choosing $a$ are possible.)

2.6. There are rendering artifacts because the two surfaces overlap and hence have the same depth values; rounding during depth computation can make one surface appear on top of the other. Possible resolutions include slightly offsetting one surface, adding a tie-breaking rule when depths are within some tolerance of each other, or merging the geometry before rendering to avoid overlap altogether.

2.7. (a) Recall that floating-point arithmetic changes spacing as the order of magnitude of the value changes. Thus, it makes sense to have multiplicative error that is relative to the scale of $x$ and $y$.
(b) (adapted from course notes by D. Bindel, Cornell CS) The recurrence for the ground-truth sum is simply $s_k = s_{k-1} + x_ky_k$. Error terms for the addition and multiplication steps show
$$\hat s_k = \left(\hat s_{k-1} + x_ky_k(1 + \varepsilon^\times_k)\right)(1 + \varepsilon^+_k).$$
Subtracting the two shows:
$$\begin{aligned}
\hat s_k - s_k &= \left[\left(\hat s_{k-1} + x_ky_k(1 + \varepsilon^\times_k)\right)(1 + \varepsilon^+_k)\right] - \left[s_{k-1} + x_ky_k\right] &&\text{by the recurrences above}\\
&= \left[\hat s_{k-1} + \varepsilon^+_k\hat s_{k-1} + x_ky_k(1 + \varepsilon^\times_k) + x_ky_k\varepsilon^+_k(1 + \varepsilon^\times_k)\right] - \left[s_{k-1} + x_ky_k\right]\\
&= [\hat s_{k-1} - s_{k-1}] + \varepsilon^+_k\hat s_{k-1} + x_ky_k(\varepsilon^\times_k + \varepsilon^+_k + \varepsilon^+_k\varepsilon^\times_k)\\
&= [\hat s_{k-1} - s_{k-1}](1 + \varepsilon^+_k) + \varepsilon^+_ks_{k-1} + x_ky_k(\varepsilon^\times_k + \varepsilon^+_k + \varepsilon^+_k\varepsilon^\times_k)\\
&= [\hat s_{k-1} - s_{k-1}](1 + \varepsilon^+_k) + \varepsilon^+_ks_k + x_ky_k(\varepsilon^\times_k + \varepsilon^+_k\varepsilon^\times_k) &&\text{since } s_k = s_{k-1} + x_ky_k\\
&= [\hat s_{k-1} - s_{k-1}](1 + \varepsilon^+_k) + \varepsilon^+_ks_k + x_ky_k\varepsilon^\times_k + x_ky_k\varepsilon^+_k\varepsilon^\times_k
\end{aligned}$$
We can expand this inductively:
$$\begin{aligned}
\hat s_0 - s_0 &= 0\\
\hat s_1 - s_1 &= [\hat s_0 - s_0](1 + \varepsilon^+_1) + \varepsilon^+_1x_1y_1 + x_1y_1\varepsilon^\times_1 + x_1y_1\varepsilon^+_1\varepsilon^\times_1 = x_1y_1(\varepsilon^+_1 + \varepsilon^\times_1) + x_1y_1\varepsilon^+_1\varepsilon^\times_1\\
\hat s_2 - s_2 &= [\hat s_1 - s_1](1 + \varepsilon^+_2) + \varepsilon^+_2(x_1y_1 + x_2y_2) + x_2y_2\varepsilon^\times_2 + x_2y_2\varepsilon^+_2\varepsilon^\times_2\\
&= x_1y_1(\varepsilon^+_1 + \varepsilon^\times_1 + \varepsilon^+_1\varepsilon^+_2 + \varepsilon^\times_1\varepsilon^+_2 + \varepsilon^+_2) + x_2y_2(\varepsilon^+_2 + \varepsilon^\times_2) + \left[x_1y_1\varepsilon^+_1\varepsilon^\times_1 + x_2y_2\varepsilon^+_2\varepsilon^\times_2\right] + O(\varepsilon^3_{\max})\\
&\ \ \vdots
\end{aligned}$$
Applying induction, this recurrence shows
$$\hat s_n - s_n = \sum_{i=1}^n x_iy_i\left(\varepsilon^\times_i + \sum_{j=i}^n\varepsilon^+_j\right) + O(\varepsilon^2_{\max}), \qquad\text{so}\qquad |\hat s_n - s_n| \le n\,\varepsilon_{\max}\sum_k|x_k||y_k| + O(\varepsilon^2_{\max}),$$
as desired.
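A sketch of the shift trick from 2.5(c) above. It uses $a = \max_k x_k$, one of the alternative heuristics the solution alludes to, which guarantees every shifted exponent is nonpositive:

```python
import numpy as np

# The shift from 2.5(c): ln(sum_k e^{x_k}) = a + ln(sum_k e^{x_k - a}).
# With a = max(x), every shifted exponent is <= 0, so nothing overflows
# and at least one summand equals 1.
def logsumexp(x):
    a = np.max(x)
    return a + np.log(np.sum(np.exp(x - a)))

x = np.array([-1000.0, -1001.0, -1002.0])
print(np.log(np.sum(np.exp(x))))  # -inf: naive version underflows to log(0)
print(logsumexp(x))               # ~ -999.59, computed stably
```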


2.8. For convenience, define $d \equiv x - y$. We'll start by simplifying the numerator of the relative error and then substitute:
$$(1 + \varepsilon_x)x - (1 + \varepsilon_y)y = (x - y) + (\varepsilon_xx - \varepsilon_yy) = d + \varepsilon_xd + (\varepsilon_x - \varepsilon_y)y$$
$$\implies (1 + \varepsilon)\left((1 + \varepsilon_x)x - (1 + \varepsilon_y)y\right) = (1 + \varepsilon)\left(d + \varepsilon_xd + (\varepsilon_x - \varepsilon_y)y\right) = (1 + \varepsilon)d + \varepsilon_x(1 + \varepsilon)d + (1 + \varepsilon)(\varepsilon_x - \varepsilon_y)y$$
$$\implies E = \frac{(1 + \varepsilon)\left((1 + \varepsilon_x)x - (1 + \varepsilon_y)y\right) - (x - y)}{x - y} = \frac{\varepsilon d + \varepsilon_x(1 + \varepsilon)d + (1 + \varepsilon)(\varepsilon_x - \varepsilon_y)y}{d} = \varepsilon + \varepsilon_x(1 + \varepsilon) + (1 + \varepsilon)(\varepsilon_x - \varepsilon_y)\frac{y}{d}$$
This can be unbounded as $d \to 0$.

2.9. (a) Implicitly differentiating the relationship $0 = f(x(\varepsilon)) + \varepsilon p(x(\varepsilon))$ with respect to $\varepsilon$ shows
$$0 = \frac{d}{d\varepsilon}\left[f(x(\varepsilon)) + \varepsilon p(x(\varepsilon))\right] = f'(x(\varepsilon))x'(\varepsilon) + p(x(\varepsilon)) + \varepsilon p'(x(\varepsilon))x'(\varepsilon) \text{ by the chain rule.}$$
Substituting $\varepsilon = 0$ and using $x^* = x(0)$ shows
$$0 = f'(x^*)x'(0) + p(x^*) \implies x'(0) = -\frac{p(x^*)}{f'(x^*)}.$$
(b) We differentiate
$$f'(x) = \frac{d}{dx}\left[(x-1)\cdot(x-2)\cdot\cdots\cdot(x-20)\right] = (x-2)\cdot\cdots\cdot(x-20) + (x-1)\cdot(x-3)\cdot\cdots\cdot(x-20) + \cdots + (x-1)\cdot\cdots\cdot(x-19)$$
by the product rule. Substituting $x = j$ shows
$$f'(j) = (j-1)\cdot(j-2)\cdot\cdots\cdot(j-(j-1))\cdot(j-(j+1))\cdot\cdots\cdot(j-20),$$
since every other term contains the factor $(j - j) = 0$. For $p(x) = x^{19}$, from the previous part we have
$$x'(j) = -\frac{j^{19}}{(j-1)\cdot(j-2)\cdot\cdots\cdot(j-(j-1))\cdot(j-(j+1))\cdot\cdots\cdot(j-20)} = -\prod_{k\ne j}\frac{j}{j - k}.$$
(c) $x'(1) \approx 8.2\times 10^{-18}$ and $x'(20) \approx -4.3\times 10^7$; hence, the root $x^* = 1$ is far more stable.
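The sensitivities in 2.9(c) can be reproduced from the product formula above; this snippet is a direct translation:

```python
import numpy as np

# Reproducing 2.9(c): sensitivity x'(j) = -prod_{k != j} j / (j - k) of the
# roots of (x-1)...(x-20) under the perturbation p(x) = x^19.
def root_sensitivity(j):
    return -np.prod([j / (j - k) for k in range(1, 21) if k != j])

print(root_sensitivity(1))   # ~  8.2e-18: x* = 1 is extremely stable
print(root_sensitivity(20))  # ~ -4.3e+07: x* = 20 is wildly unstable
```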


2.10. (a) The alternative formula can be obtained by rationalizing the numerator of the quadratic formula:
$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\cdot\frac{-b \mp \sqrt{b^2 - 4ac}}{-b \mp \sqrt{b^2 - 4ac}} = \frac{b^2 - (b^2 - 4ac)}{2a\left(-b \mp \sqrt{b^2 - 4ac}\right)} = \frac{4ac}{2a\left(-b \mp \sqrt{b^2 - 4ac}\right)} = \frac{2c}{-b \mp \sqrt{b^2 - 4ac}}$$
(b) When $b \geq 0$, take
$$x_1 = \frac{-b - \sqrt{b^2 - 4ac}}{2a},\qquad x_2 = \frac{c}{ax_1},$$
and otherwise take
$$x_1 = \frac{-b + \sqrt{b^2 - 4ac}}{2a},\qquad x_2 = \frac{c}{ax_1}.$$
This way, there never can be cancellation, because we always move $-b$ farther from the origin in the numerator.

2.11. The bounds are worked out below, writing $[x] = [\underline{x}, \overline{x}]$ and $[y] = [\underline{y}, \overline{y}]$:
$$[x] + [y] = [\underline{x} + \underline{y},\ \overline{x} + \overline{y}]$$
$$[x] - [y] = [\underline{x} - \overline{y},\ \overline{x} - \underline{y}]$$
For $[x]\times[y]$, the result depends on the signs of the endpoints:

| $\mathrm{sign}(\underline{x})$ | $\mathrm{sign}(\overline{x})$ | $\mathrm{sign}(\underline{y})$ | $\mathrm{sign}(\overline{y})$ | value |
|---|---|---|---|---|
| $+$ | $+$ | $+$ | $+$ | $[\underline{x}\underline{y},\ \overline{x}\overline{y}]$ |
| $-$ | $+$ | $+$ | $+$ | $[\underline{x}\overline{y},\ \overline{x}\overline{y}]$ |
| $-$ | $-$ | $+$ | $+$ | $[\underline{x}\overline{y},\ \overline{x}\underline{y}]$ |
| $+$ | $+$ | $-$ | $+$ | $[\overline{x}\underline{y},\ \overline{x}\overline{y}]$ |
| $-$ | $+$ | $-$ | $+$ | $[\min(\underline{x}\overline{y}, \overline{x}\underline{y}),\ \max(\underline{x}\underline{y}, \overline{x}\overline{y})]$ |
| $-$ | $-$ | $-$ | $+$ | $[\underline{x}\overline{y},\ \underline{x}\underline{y}]$ |
| $+$ | $+$ | $-$ | $-$ | $[\overline{x}\underline{y},\ \underline{x}\overline{y}]$ |
| $-$ | $+$ | $-$ | $-$ | $[\overline{x}\underline{y},\ \underline{x}\underline{y}]$ |
| $-$ | $-$ | $-$ | $-$ | $[\overline{x}\overline{y},\ \underline{x}\underline{y}]$ |

$$[x] \div [y] = [x]\times\left[\frac{1}{\overline{y}},\ \frac{1}{\underline{y}}\right] \quad(\text{when } 0 \notin [y])$$
$$[x]^{1/2} = [\underline{x}^{1/2},\ \overline{x}^{1/2}]$$
In finite-precision arithmetic, always round down the lower bounds and round up the upper bounds.
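A sketch of the cancellation-free solver from 2.10(b) above, assuming $a \ne 0$ and real roots:

```python
import math

# The cancellation-free quadratic solver from 2.10(b): compute the root whose
# numerator moves -b farther from the origin, then recover the other root
# from the identity x1 * x2 = c / a.
def solve_quadratic(a, b, c):
    disc = math.sqrt(b * b - 4 * a * c)
    if b >= 0:
        x1 = (-b - disc) / (2 * a)
    else:
        x1 = (-b + disc) / (2 * a)
    x2 = c / (a * x1)
    return x1, x2

# Classic ill-conditioned case: the naive formula (-b + disc)/(2a) suffers
# catastrophic cancellation when b^2 >> 4ac.
print(solve_quadratic(1.0, 1e8, 1.0))  # (-1e8, -1e-8) to high relative accuracy
```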


2.12. (a) Perturbing any of three collinear points slightly makes them not collinear. Furthermore, points may appear collinear if you zoom out far enough but appear less so as you zoom in.
(b) [Figure: sketch of the region of points $\varepsilon$-collinear with $p$ and $q$.]
(c) [Figure: analogous sketch for the second predicate on $p$ and $q$.]
(d) Obvious from the drawings above; $\varepsilon$-collinear points form the intersection of four half-planes, two of which come from the $\varepsilon$-clockwise condition and two of which come from the $\varepsilon$-counterclockwise condition.
(e) No. See §3.1 of [55] for an example.

3.1. No; LU may not be possible for matrices requiring pivoting.

3.2. The steps of Gaussian elimination are below:
$$\left(\begin{array}{cc|c} 2 & 4 & 2\\ 3 & 5 & 4\end{array}\right) \to \left(\begin{array}{cc|c} 1 & 2 & 1\\ 3 & 5 & 4\end{array}\right), \text{ with elimination matrix } \begin{pmatrix} 1/2 & 0\\ 0 & 1\end{pmatrix}$$
$$\to \left(\begin{array}{cc|c} 1 & 2 & 1\\ 0 & -1 & 1\end{array}\right), \text{ with elimination matrix } \begin{pmatrix} 1 & 0\\ -3 & 1\end{pmatrix}$$
$$\to \left(\begin{array}{cc|c} 1 & 0 & 3\\ 0 & -1 & 1\end{array}\right), \text{ with elimination matrix } \begin{pmatrix} 1 & 2\\ 0 & 1\end{pmatrix}$$
So, $x = 3$ and $y = -1$. From the steps above, we know
$$U = \begin{pmatrix} 1 & 2\\ 0 & -1\end{pmatrix}, \quad\text{and}\quad L = \begin{pmatrix} 1/2 & 0\\ 0 & 1\end{pmatrix}^{-1}\begin{pmatrix} 1 & 0\\ -3 & 1\end{pmatrix}^{-1} = \begin{pmatrix} 2 & 0\\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & 0\\ 3 & 1\end{pmatrix} = \begin{pmatrix} 2 & 0\\ 3 & 1\end{pmatrix}.$$

3.3. Computed using Gaussian elimination:
$$L = \begin{pmatrix} 1 & 0 & 0\\ 3 & 1 & 0\\ 6 & 11 & 1\end{pmatrix} \qquad U = \begin{pmatrix} 1 & 2 & 7\\ 0 & -1 & -22\\ 0 & 0 & 204\end{pmatrix}$$

3.4. Where it states "optionally insert pivoting code here," find the row $r$ with the largest absolute value in column $p$; then swap row $r$ and row $p$ of both $A$ and $b$.

3.5. No. Full pivoting can be preferable numerically but technically does not make a difference. The only way partial pivoting would fail is if there is an all-zero column, which would indicate that $A$ is not invertible.

3.6. Write $A = A_1 + A_2i$, $b = b_1 + b_2i$, and $x = x_1 + x_2i$. Then,
$$Ax = b \implies (A_1 + A_2i)(x_1 + x_2i) = b_1 + b_2i \implies (A_1x_1 - A_2x_2) + (A_2x_1 + A_1x_2)i = b_1 + b_2i.$$
So, we can solve the block system
$$\begin{pmatrix} A_1 & -A_2\\ A_2 & A_1\end{pmatrix}\begin{pmatrix} x_1\\ x_2\end{pmatrix} = \begin{pmatrix} b_1\\ b_2\end{pmatrix}.$$

3.7. Carrying out Gaussian elimination is the same as pre-multiplying by the inverse of the leftmost $n\times n$ block. Hence, the output is $A^{-1}(A\,|\,I_{n\times n}) = (A^{-1}A\,|\,A^{-1}) = (I_{n\times n}\,|\,A^{-1})$.
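The real block system from 3.6 above is easy to validate against a direct complex solve; the test data here is random:

```python
import numpy as np

# Check of problem 3.6: solve the complex system Ax = b via the real
# block system [[A1, -A2], [A2, A1]] [x1; x2] = [b1; b2].
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
b = rng.standard_normal(4) + 1j * rng.standard_normal(4)

A1, A2 = A.real, A.imag
M = np.block([[A1, -A2], [A2, A1]])
rhs = np.concatenate([b.real, b.imag])
x12 = np.linalg.solve(M, rhs)
x = x12[:4] + 1j * x12[4:]

print(np.allclose(x, np.linalg.solve(A, b)))  # True
```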


3.8. The determinant of a lower-triangular matrix is the product of its diagonal elements. If $L$ has a zero diagonal element, $\det(L) = 0$, contradicting that $L$ is invertible. We used this in §3.5.3 when we assumed $L$ can have ones along the diagonal.

3.9. Lower-triangular systems are solved using forward substitution, and the steps of forward substitution can be encoded as lower-triangular elimination matrices. By Proposition 3.1, their product is lower-triangular, so the inverse of a lower-triangular matrix is itself lower-triangular.

3.10. Notice $a_{11} = \ell_{11}u_{11}$. So if $a_{11} = 0$, then one of $\ell_{11}$ or $u_{11}$ must equal zero. This trivially implies that $L$ or $U$ isn't full rank, contradicting $A$ being full rank.

3.11. From linear algebra, $\det(A) = \det(L)\cdot\det(U)$, and the determinant of a triangular matrix is the product of its diagonal elements.

3.12. (a) The initial forward-substitution phase yields an upper-triangular matrix $U$. $U$ is obtained by alternating between pivoting (a permutation matrix) and forward substitution (a product of lower-triangular elimination matrices).
(b) $P_i$ is the matrix of partial pivoting for row $i$; partial pivoting only considers unprocessed rows $j \geq i$. Similarly, each forward-substitution matrix takes the form indicated because it adds a multiple of row $i$ to a row $k > i$; see §3.3.3.
(c) We simplify as follows:
$$\begin{aligned}
P_{jk}(I_{n\times n} + ce_ke_i^\top) &= P_{jk} + c(P_{jk}e_k)e_i^\top &&\text{by distributing the product}\\
&= P_{jk} + ce_je_i^\top &&\text{since } P_{jk} \text{ permutes rows } j \text{ and } k\\
&= P_{jk} + ce_j(P_{jk}e_i)^\top &&\text{since } P_{jk} \text{ does not affect row } i\\
&= P_{jk} + ce_je_i^\top P_{jk}^\top &&\text{since } (AB)^\top = B^\top A^\top\\
&= P_{jk} + ce_je_i^\top P_{jk} &&\text{since the inverse of a row swap is the same row swap}\\
&= (I_{n\times n} + ce_je_i^\top)P_{jk}, \text{ as needed.}
\end{aligned}$$
(d) Follows from induction. Parts (b) and (c) together show that $P_iL_j = \tilde LP_i$ for some lower-triangular $\tilde L$ when $i > j$.
(e) From (a), $U = L_{n-1}P_{n-1}\cdots L_1P_1A$. Applying (d) shows $U = \tilde LPA$ for some lower-triangular matrix $\tilde L$ and some permutation matrix $P$. Hence, $A = P^\top\tilde L^{-1}U$, and the result follows from the facts that the transpose of a permutation matrix is a permutation matrix and the inverse of a lower-triangular matrix is lower-triangular.
(f) Suppose $A = PLU$. Then, $A^{-1}b = U^{-1}L^{-1}P^\top b$. So, apply $P^\top$ to $b$, then $L^{-1}$ via forward substitution, then $U^{-1}$ via back substitution.
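To illustrate 3.12(e)-(f) above concretely, this sketch uses SciPy's LU factorization (which returns $A = PLU$) and solves $Ax = b$ with one permutation, one forward substitution, and one back substitution; the test system is random:

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Illustration of 3.12(e)-(f): factor A = PLU, then solve Ax = b by applying
# P^T, forward substitution with L, and back substitution with U.
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5))
b = rng.standard_normal(5)

P, L, U = lu(A)                                # SciPy returns A = P @ L @ U
y = solve_triangular(L, P.T @ b, lower=True)   # forward substitution
x = solve_triangular(U, y, lower=False)        # back substitution
print(np.allclose(A @ x, b))                   # True
```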


3.13. (a) We can work through the product in the reverse direction:
$$\begin{pmatrix} I & 0\\ CA^{-1} & I\end{pmatrix}\begin{pmatrix} A & 0\\ 0 & D - CA^{-1}B\end{pmatrix}\begin{pmatrix} I & A^{-1}B\\ 0 & I\end{pmatrix} = \begin{pmatrix} I & 0\\ CA^{-1} & I\end{pmatrix}\begin{pmatrix} A & B\\ 0 & D - CA^{-1}B\end{pmatrix} = \begin{pmatrix} A & B\\ C & D\end{pmatrix} = M, \text{ as desired.}$$
(b) In our factorization from (a), the left-hand and right-hand matrices are lower- and upper-triangular, respectively. Hence, we only have to factor the middle matrix as a product $LU$ to get our factorization (the product of triangular matrices is triangular). We expand:
$$\begin{pmatrix} A & 0\\ 0 & D - CA^{-1}B\end{pmatrix} = \begin{pmatrix} L_1U_1 & 0\\ 0 & L_2U_2\end{pmatrix}, \text{ as constructed} = \begin{pmatrix} L_1 & 0\\ 0 & L_2\end{pmatrix}\begin{pmatrix} U_1 & 0\\ 0 & U_2\end{pmatrix}$$
Hence, from the factorization in (a), we can write:
$$M = \underbrace{\left[\begin{pmatrix} I & 0\\ CA^{-1} & I\end{pmatrix}\begin{pmatrix} L_1 & 0\\ 0 & L_2\end{pmatrix}\right]}_{L}\underbrace{\left[\begin{pmatrix} U_1 & 0\\ 0 & U_2\end{pmatrix}\begin{pmatrix} I & A^{-1}B\\ 0 & I\end{pmatrix}\right]}_{U}$$
(c) The recursive algorithm could proceed as follows (runtime denoted $f(n)$ for factoring an $n\times n$ matrix):
- Recursively factor $A = L_1U_1$ (runtime $f(n/2)$).
- Construct $A^{-1}B = U_1^{-1}L_1^{-1}B$ using forward and then backward substitution (runtime $O(n^3)$).
- Construct $R \equiv D - CA^{-1}B$ via matrix multiplication/subtraction (runtime $O(n^3)$).
- Factor $R = L_2U_2$ (runtime $f(n/2)$).
- Return the factorization from (b) (runtime $O(n^3)$ for matrix multiplication and construction of $CA^{-1}$).
Hence, the runtime satisfies $f(n) = 2f(n/2) + O(n^3)$, which is no faster than the $O(n^3)$ method presented in the chapter. As an aside, however, a few recursive steps using this formula may help break the LU factorization problem up for parallel algorithmic processing.

3.14. It suffices to show that forward substitution can be carried out without pivoting, each time leaving a lower-right block that is diagonally dominant. This ensures that pivots are always nonzero, and subsequently back substitution can be carried out as usual. Suppose we write $A$ as
$$A = \begin{pmatrix} c & v^\top\\ w & B\end{pmatrix}.$$
After one step of Gaussian elimination, we modify to
$$\bar A = \begin{pmatrix} 1 & \frac1cv^\top\\ 0 & B - \frac{wv^\top}{c}\end{pmatrix}.$$
So, we must check that $\bar B \equiv B - \frac{wv^\top}{c}$ is diagonally dominant. We do so for column $i$ as follows:
$$\begin{aligned}
\sum_{j\ne i}|\bar b_{ji}| &= \sum_{j\ne i}|b_{ji} - w_jv_i/c|\\
&\le \sum_{j\ne i}|b_{ji}| + \frac{|v_i|}{|c|}\sum_{j\ne i}|w_j| &&\text{by the triangle inequality}\\
&\le (|b_{ii}| - |v_i|) + \frac{|v_i|}{|c|}\sum_{j\ne i}|w_j| &&\text{by diagonal dominance of } A \text{ in column } i\\
&\le (|b_{ii}| - |v_i|) + \frac{|v_i|}{|c|}(|c| - |w_i|) &&\text{by diagonal dominance of } A \text{ in column } 1\\
&= |b_{ii}| - \frac{|v_i|}{|c|}|w_i|\\
&\le |b_{ii} - w_iv_i/c| &&\text{by the triangle inequality}\\
&= |\bar b_{ii}|, \text{ as needed.}
\end{aligned}$$
From a numerical standpoint, pivoting may help improve conditioning of arithmetic operations, as dividing by $c$ can still cause issues.

3.15. Suppose $A = LU = L'U'$, where $L$ and $L'$ have ones along their diagonals. Then, $(L')^{-1}L = U'U^{-1}$. The left-hand side of this equality is lower-triangular and the right-hand side is upper-triangular. Hence, $D \equiv (L')^{-1}L = U'U^{-1}$ is a diagonal matrix. Writing $L = L'D$ shows $D = I_{n\times n}$ since $L, L'$ have ones along their diagonals (and $D$ scales $L'$ column-by-column). In other words, $L = L'$, as needed.

4.1. Many possible answers. One example is the matrix whose diagonal and superdiagonal both equal 1, e.g., for size $6\times 6$:
$$A = \begin{pmatrix} 1&1&0&0&0&0\\ 0&1&1&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&1&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&1\end{pmatrix}.$$
This matrix has an upper-triangular inverse that alternates between $\pm 1$:
$$A^{-1} = \begin{pmatrix} 1&-1&1&-1&1&-1\\ 0&1&-1&1&-1&1\\ 0&0&1&-1&1&-1\\ 0&0&0&1&-1&1\\ 0&0&0&0&1&-1\\ 0&0&0&0&0&1\end{pmatrix}.$$
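The claim in 4.1 can be verified in a couple of lines:

```python
import numpy as np

# Check of problem 4.1: the 6x6 matrix with ones on the diagonal and
# superdiagonal has an upper-triangular inverse alternating between +/-1.
n = 6
A = np.eye(n) + np.diag(np.ones(n - 1), k=1)
print(np.round(np.linalg.inv(A)).astype(int))
```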


4.2. $D$ is the lower-right submatrix of $ECE^\top$; since $ECE^\top$ is symmetric (by symmetry of $C$), $D$ must be symmetric as well. Positive-definiteness of $D$ follows from positive-definiteness of $C$ and invertibility of $E$:
$$w^\top Dw = \begin{pmatrix} 0\\ w\end{pmatrix}^\top ECE^\top\begin{pmatrix} 0\\ w\end{pmatrix} > 0.$$

4.3. (a) The gradient of $f$ contains the partial derivatives with respect to each of the $n\times m$ possible perturbations, which can be organized naturally in an $n\times m$ matrix.
(b) (i) $xy^\top$ (ii) $2Axx^\top$ (iii) $-2W(x - Ay)y^\top$
(c) (i) Follows from linearity of $d/dt$.
(ii) Trivial since $\partial$ is taken elementwise; the transpose simply re-indexes the elements of $X$.
(iii) We work this one out more explicitly:
$$\partial(XY)_{ij} = \partial\sum_k(X_{ik}Y_{kj}) \text{ by definition of matrix multiplication} = \sum_k\left(\partial X_{ik}\,Y_{kj} + X_{ik}\,\partial Y_{kj}\right) \text{ by the (scalar) product rule} = \left((\partial X)Y + X(\partial Y)\right)_{ij} \text{ by definition of matrix multiplication}$$
(iv) Since $XX^{-1} = I_{n\times n}$, from the previous part we know $0 = (\partial X)X^{-1} + X(\partial X^{-1})$. Isolating $\partial X^{-1}$ provides the desired result.

4.4. Define $\bar x \equiv x_{\mathrm{sum}}$, $\bar y \equiv y_{\mathrm{sum}}$. We know $A\bar x + pb = \bar y$. More explicitly,
$$a_{11}\bar x_1 + a_{12}\bar x_2 + pb_1 = \bar y_1$$
$$a_{21}\bar x_1 + a_{22}\bar x_2 + pb_2 = \bar y_2$$
Similarly, $AX + b\bar x^\top = C$ implies:
$$a_{11}X_{11} + a_{12}X_{21} + b_1\bar x_1 = C_{11}$$
$$a_{11}X_{12} + a_{12}X_{22} + b_1\bar x_2 = C_{12}$$
$$a_{21}X_{11} + a_{22}X_{21} + b_2\bar x_1 = C_{21}$$
$$a_{21}X_{12} + a_{22}X_{22} + b_2\bar x_2 = C_{22}$$
Hence, we can write:
$$\underbrace{\begin{pmatrix}
\bar x_1 & \bar x_2 & 0 & 0 & p & 0\\
0 & 0 & \bar x_1 & \bar x_2 & 0 & p\\
X_{11} & X_{21} & 0 & 0 & \bar x_1 & 0\\
X_{12} & X_{22} & 0 & 0 & \bar x_2 & 0\\
0 & 0 & X_{11} & X_{21} & 0 & \bar x_1\\
0 & 0 & X_{12} & X_{22} & 0 & \bar x_2
\end{pmatrix}}_{M}\underbrace{\begin{pmatrix} a_{11}\\ a_{12}\\ a_{21}\\ a_{22}\\ b_1\\ b_2\end{pmatrix}}_{u} = \underbrace{\begin{pmatrix} \bar y_1\\ \bar y_2\\ C_{11}\\ C_{12}\\ C_{21}\\ C_{22}\end{pmatrix}}_{d}.$$
The systems for $x$ and $y$ components decouple, as might be expected.
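A finite-difference check of the identity $\partial(X^{-1}) = -X^{-1}(\partial X)X^{-1}$ from 4.3(c)(iv) above, with arbitrary test matrices:

```python
import numpy as np

# Finite-difference check of 4.3(c)(iv): d(X^{-1}) = -X^{-1} (dX) X^{-1}.
rng = np.random.default_rng(5)
X = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # comfortably invertible
dX = rng.standard_normal((4, 4))                 # perturbation direction
h = 1e-6

fd = (np.linalg.inv(X + h * dX) - np.linalg.inv(X)) / h
analytic = -np.linalg.inv(X) @ dX @ np.linalg.inv(X)
print(np.allclose(fd, analytic, atol=1e-4))  # True, up to O(h) error
```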


4.5. Rather than using Lagrange multipliers, we can simply use the fact that $t(v)$ is a constant in $E_D$ if $v \in B$. Suppose $u \notin B$. Then,
$$\begin{aligned}
0 &= \frac{\partial}{\partial t(u)}\sum_{(v,w)\in E}\|t(v) - t(w)\|_2^2 &&\text{at a critical point}\\
&= \sum_{v\in N(u)}2\left(t(u) - t(v)\right) &&\text{since all other terms are zero}\\
&= 2|N(u)|\,t(u) - 2\sum_{v\in N(u)}t(v)\\
\implies t(u) &= \frac{1}{|N(u)|}\sum_{v\in N(u)}t(v), \text{ as desired.}
\end{aligned}$$

4.6. (a) Suppose $A = LU$, and define $D = UL^{-\top}$. Note $LDL^\top = L(UL^{-\top})L^\top = LU = A$, so we only need to show that $D$ is diagonal. Since $L$ is lower-triangular, $L^\top$ and hence $L^{-\top}$ are both upper-triangular; this shows $D$ is the product of upper-triangular matrices and thus is upper-triangular. Since $A$ is symmetric, $LDL^\top = A = A^\top = LD^\top L^\top \implies D = D^\top$. We have shown that $D$ is a symmetric upper-triangular matrix, which implies that it is diagonal.
(b) Suppose we re-define $E$ without using square roots as
$$E \equiv \begin{pmatrix} 1 & 0^\top\\ r & I_{(n-1)\times(n-1)}\end{pmatrix}.$$
Then,
$$EC = \begin{pmatrix} c_{11} & v^\top\\ 0 & D\end{pmatrix} \implies ECE^\top = \begin{pmatrix} c_{11} & 0^\top\\ 0 & D\end{pmatrix}.$$
We can apply the same pattern of symmetric pre- and post-multiplication, but now get a diagonal matrix down the center rather than the identity. This algorithm did not need square roots and hence can work even when the matrix is symmetric but not positive definite.
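To illustrate 4.6(b) concretely: a square-root-free symmetric factorization succeeds on an indefinite matrix where Cholesky fails. This sketch uses SciPy's `ldl`, which implements a pivoted (Bunch-Kaufman) variant of the same idea; the test matrix is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import ldl

# Illustration of 4.6(b): a square-root-free A = L D L^T factorization
# succeeds on a symmetric indefinite matrix, where Cholesky fails.
A = np.array([[ 2.0,  1.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  2.0,  1.0]])

L, D, perm = ldl(A)                   # pivoted LDL^T factorization
print(np.allclose(L @ D @ L.T, A))    # True
try:
    np.linalg.cholesky(A)
except np.linalg.LinAlgError as e:
    print("Cholesky fails:", e)       # matrix is indefinite
```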


4.7. We equivalently minimize $\frac12\|x\|_2^2$ rather than $\|x\|_2$. Then, the Lagrange multiplier system becomes
$$\Lambda = \tfrac12\|x\|_2^2 + \lambda^\top(b - Ax) \implies 0 = \nabla_x\Lambda = x - A^\top\lambda \implies x = A^\top\lambda.$$
Returning to the constraint, $b = Ax = AA^\top\lambda \implies \lambda = (AA^\top)^{-1}b$. Substituting this into the expression above shows $x = A^\top\lambda = A^\top(AA^\top)^{-1}b$.

Recall that the Tikhonov system minimizes $\|Ax - b\|_2^2 + \alpha\|x\|_2^2$ for some $\alpha > 0$. We can decompose any $x \in \mathbb{R}^n$ as $x = A^\top y + x_0$, where $Ax_0 = 0$. In this notation,
$$\|x\|_2^2 = x^\top x = (A^\top y + x_0)^\top(A^\top y + x_0) = y^\top AA^\top y + y^\top Ax_0 + x_0^\top A^\top y + x_0^\top x_0 = y^\top AA^\top y + x_0^\top x_0 \quad\text{since } Ax_0 = 0.$$
Similarly,
$$\|Ax\|_2^2 = \|A(A^\top y + x_0)\|_2^2 = \|AA^\top y\|_2^2 \text{ since } Ax_0 = 0, \qquad b^\top Ax = b^\top AA^\top y.$$
From our substitution and subsequent formulas, an equivalent way to carry out Tikhonov-regularized least-squares is to solve the following optimization:
$$\min_{x_0,y}\ \alpha y^\top AA^\top y + \alpha x_0^\top x_0 + \|AA^\top y\|_2^2 - 2b^\top AA^\top y \quad\text{s.t. } Ax_0 = 0.$$
The problem decouples over $x_0$ and $y$. The optimal $x_0$ is $0$. The optimality condition for $y$ is
$$0 = \alpha AA^\top y + (AA^\top)^2y - AA^\top b \implies y = (AA^\top + \alpha I)^{-1}b,$$
so
$$x = A^\top y + x_0 = A^\top(AA^\top + \alpha I)^{-1}b + 0 \text{ from our work above} \to A^\top(AA^\top)^{-1}b \text{ as } \alpha \to 0, \text{ as desired.}$$
The final limit is permissible because $AA^\top$ is invertible; any argument involving "$(A^\top A)^{-1}$" cannot be correct because $A^\top A$ is not necessarily invertible.

4.8. The algorithm takes a forward pass and then a backward pass. The forward pass uses these formulas for increasing $i$:
$$\bar w_i = \begin{cases} w_i/v_i & \text{when } i = 1\\ w_i/(v_i - u_i\bar w_{i-1}) & \text{when } i > 1\end{cases} \qquad\qquad \bar b_i = \begin{cases} b_i/v_i & \text{when } i = 1\\ (b_i - u_i\bar b_{i-1})/(v_i - u_i\bar w_{i-1}) & \text{when } i > 1\end{cases}$$
The backward pass becomes:
$$x_n = \bar b_n \qquad\qquad x_i = \bar b_i - \bar w_ix_{i+1}.$$

4.9. The Lagrange multiplier function for this problem is $\Lambda = \frac12x^\top A^\top Ax + \lambda^\top(c - Bx)$. Differentiating with respect to $x$ shows $0 = A^\top Ax - B^\top\lambda$. Hence (absorbing the sign into $\lambda$), we can solve the following linear system:
$$\begin{pmatrix} A^\top A & B^\top\\ B & 0\end{pmatrix}\begin{pmatrix} x\\ \lambda\end{pmatrix} = \begin{pmatrix} 0\\ c\end{pmatrix}.$$

4.10. (a) Take $x \in \mathbb{R}^n \implies x^\top Ax = x^\top LL^\top x = \|L^\top x\|_2^2 \geq 0$.
(b) Try to run Cholesky factorization; it is positive semidefinite only if the factorization succeeds.

4.11. Yes; $\mathbb{R}^{m\times n}$ is a finite-dimensional ($mn$-dimensional) vector space, and all norms on finite-dimensional vector spaces are equivalent.
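The forward and backward passes of 4.8 above translate directly into code. A sketch, checked against a dense solve (variable names follow the solution's $u$, $v$, $w$ for the sub-, main, and superdiagonals; the test system is random and diagonally dominant):

```python
import numpy as np

# The tridiagonal solver from 4.8: forward pass then backward pass, O(n).
def solve_tridiagonal(u, v, w, b):
    n = len(v)
    wbar, bbar = np.empty(n - 1), np.empty(n)
    wbar[0], bbar[0] = w[0] / v[0], b[0] / v[0]
    for i in range(1, n):
        denom = v[i] - u[i - 1] * wbar[i - 1]
        if i < n - 1:
            wbar[i] = w[i] / denom
        bbar[i] = (b[i] - u[i - 1] * bbar[i - 1]) / denom
    x = np.empty(n)
    x[-1] = bbar[-1]
    for i in range(n - 2, -1, -1):
        x[i] = bbar[i] - wbar[i] * x[i + 1]
    return x

rng = np.random.default_rng(6)
n = 8
u, w = rng.standard_normal(n - 1), rng.standard_normal(n - 1)
v = 4 + rng.standard_normal(n)  # diagonally dominant, so no pivoting needed
A = np.diag(v) + np.diag(u, -1) + np.diag(w, 1)
b = rng.standard_normal(n)
print(np.allclose(solve_tridiagonal(u, v, w, b), np.linalg.solve(A, b)))  # True
```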


4.12. (c) Take $v$ to satisfy $Av = \lambda v$. Then, $|\lambda|^k\|v\| = \|A^kv\| \le \|A^k\|\,\|v\|$ by a previous part. Dividing by $\|v\|$ and taking the $k$-th root provides the desired result.
(d) Define the column sum $c_j = \sum_i|a_{ij}|$. We simplify:
$$\|A\|_1 = \max_{\|x\|_1=1}\|Ax\|_1 = \max_{\sum_i|x_i|=1}\sum_i\left|\sum_ja_{ij}x_j\right| \text{ by definition} \le \max_{\sum_i|x_i|=1}\sum_i\sum_j|a_{ij}|\,|x_j| \text{ by the triangle inequality} = \max_{\sum_i|x_i|=1}\sum_j|x_j|c_j$$
Take $k \in \arg\max_jc_j$, and without loss of generality (since this expression includes absolute values) we can assume $x_j \geq 0$ for all $j$. Suppose $x_j > 0$ for some $j \ne k$. Then, define $y = x - x_je_j + x_je_k$. We have
$$\sum_j|y_j|c_j \geq \sum_j|x_j|c_j.$$
This shows we can take $x = e_k$, and thus $\|A\|_1 \le c_k$. Since $\|Ae_k\|_1 = c_k$, this upper bound is achieved.
(e) This is a relatively standard theorem; see a matrix analysis text.

4.13. (a) (i) $E_{(i)}(z) \equiv \|y - z\|_2^2$ (ii) $E_{(ii)}(z) \equiv \sum_i(z_{i+1} - z_i)^2$
(b) Minimize $E_{(i)}(z) + \alpha E_{(ii)}(z)$; all terms are quadratic in the elements of $z$, so when we take the gradient we will get a linear system.
(c) Sparse, positive (semi-)definite.

4.14. (a) $(XX^\top + \alpha I)a = Xy$
(b) Decompose $a = a_X + a_\perp$, where $a_X \in \mathrm{span}\,\{x^{(i)}\}$ and $a_\perp\cdot x^{(i)} = 0$ for all $i$. Then,
$$Xy = (XX^\top + \alpha I)a = (XX^\top + \alpha I)(a_X + a_\perp) = (XX^\top + \alpha I)a_X + \alpha a_\perp \text{ by definition of } a_\perp$$
$$\implies X(y - X^\top a_X) - \alpha a_X = \alpha a_\perp$$
Notice that the left-hand side is in $\mathrm{span}\,\{x^{(i)}\}$, since the first term is a product with $X$ and by definition of $a_X$. But, since $\alpha \ne 0$, by definition of $a_\perp$ we must have $a_\perp = 0$, since otherwise it would have a component in $\mathrm{span}\,\{x^{(i)}\}$.
(c) Notice that $a = Xc$. Thus, we can write $(XX^\top + \alpha I)Xc = Xy$. Alternatively, we can factor $X(X^\top X + \alpha I)c = Xy$. Pre-multiplying by $X^\top$ shows $(X^\top X)(X^\top X + \alpha I)c = (X^\top X)y$, and by invertibility of $X^\top X$ we know $(X^\top X + \alpha I)c = y$.
(d) Take the columns of $X_\phi$ to be $\phi(x^{(i)})$, and take $c$ to be the solution of $(X_\phi^\top X_\phi + \alpha I)c = y$; notice that the elements of $X_\phi^\top X_\phi$ are dot products $\phi(x^{(i)})\cdot\phi(x^{(j)}) = K(x^{(i)}, x^{(j)})$, which can be computed without direct knowledge of $\phi$. Then, we can write:
$$f_\phi(x) = a\cdot\phi(x) = (X_\phi c)\cdot\phi(x) = \sum_ic_i\,\phi(x^{(i)})\cdot\phi(x) = \sum_ic_iK(x^{(i)}, x)$$
(e) The integral is a generalized inner product between functions of $s$ and $t$ where the sum has been replaced with an integral over $x$ with weight $e^{-\pi x^2}$. The algorithm we developed only needs values $K(x^{(i)}, x^{(j)})$ to compute the $c_i$'s, and then evaluates $f_\phi$ as a sum over $K(x^{(i)}, x)$. Hence, we can carry out the algorithm without ever doing the inner-product integral explicitly.
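A sketch of the kernelized regression from 4.14(c)-(d) above. The Gaussian kernel, bandwidth, regularization weight, and synthetic data are illustrative assumptions, not from the text:

```python
import numpy as np

# Kernelized regression per 4.14(c)-(d): solve (K + alpha*I) c = y with
# K_ij = K(x_i, x_j), then predict f(x) = sum_i c_i K(x_i, x).
def gaussian_kernel(x1, x2, sigma=0.5):
    return np.exp(-(x1[:, None] - x2[None, :]) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(7)
x_train = np.sort(rng.uniform(-3, 3, 30))
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(30)

alpha = 1e-2
K = gaussian_kernel(x_train, x_train)
c = np.linalg.solve(K + alpha * np.eye(len(x_train)), y_train)

x_test = np.linspace(-3, 3, 5)
f = gaussian_kernel(x_test, x_train) @ c   # f(x) = sum_i c_i K(x_i, x)
print(np.round(f, 3))            # predictions...
print(np.round(np.sin(x_test), 3))  # ...should roughly track sin(x)
```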