Welcome
Welcome to this course! In this course we will learn some of the most common numerical techniques and algorithms used to efficiently solve problems expressed using large matrices. We focus on a detailed understanding of the performance of these methods when they are applied to large-scale systems, and study topics such as convergence, accuracy and efficiency.
For information about which programming language we will use, see the page Programming language in this course.
This course is offered in study period 2. Before the course starts, you are welcome to try Quiz: Background. If you can't access it, see the FAQ. Don't worry if you don't do so well on the background quiz; you can take it as many times as you like.
Teachers: Elias Jarlebring and Parikshit Upadhyaya
Why matrix computations?
The most common computational approaches to scientific problems involve, in some way, a subproblem containing a matrix. The efficiency and robustness of the computational method used for this matrix problem are often crucial for the efficiency and robustness of the entire approach. After an accurate discretization, a partial differential equation is approximated by a problem involving a large matrix. Google's page-rank algorithm involves computations with a huge transition matrix representing links between web pages. In order to achieve good efficiency and robustness for these approaches, you need a detailed understanding of matrix computations.
Matrix computations are important in machine learning and AI: http://www.fast.ai/2017/07/17/num-lin-alg/
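To make the page-rank example above concrete, here is a minimal MATLAB sketch of the power method (a Block 1 topic) applied to a tiny "Google matrix". The 4-page link matrix and the damping factor 0.85 are made-up illustration choices, not Google's actual data:

```matlab
% Power method on a toy PageRank transition matrix (hypothetical 4-page web).
A = [0   0   1  1/2;
     1/3 0   0  0;
     1/3 1/2 0  1/2;
     1/3 1/2 0  0];          % column-stochastic link matrix (columns sum to 1)
d = 0.85;                    % damping factor (illustration choice)
n = size(A,1);
G = d*A + (1-d)/n*ones(n);   % "Google matrix", still column-stochastic
x = ones(n,1)/n;             % start from the uniform distribution
for k = 1:100
    x = G*x;                 % one power-method step
    x = x/norm(x,1);         % keep x a probability vector
end
disp(x')                     % stationary distribution = PageRank scores
```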
Some videos introducing you to numerical linear algebra:
- Gilbert Strang at MIT OpenCourseWare
- a YouTube seminar by David Gleich
- the enthusiastic TED talk about the beauty of matrices by Margot Gerritsen (Stanford)
What do I need to do?
The examination of the course consists of correctly completing:
- Homework 1
- Homework 2
- Homework 3
- Homework 4 (only for SF3580)
- Exam
The course has mandatory homeworks, and each homework can give you bonus points for the exam. If you also complete the course training tasks, you can earn additional bonus points. See the further description on the page Homework & bonus points rules.
How do I learn all this?
The learning activities of the course consist of lectures, homeworks, Canvas quizzes and wiki work. The course material can be found under Modules.
Prerequisites
- A basic course in numerical methods, for instance SF1544 (or equivalent).
- Recommended but not strictly necessary: a continuation course in numerical analysis, for instance SF2520 Applied Numerical Methods, which can be taken in parallel.
This course is elective for all master's programmes at KTH. It is conditionally elective in the second year of the Master's programme in Applied and Computational Mathematics, and mandatory for the Computational Mathematics (beräkningsmatematik) track. It is conditionally elective for the Master's programme in Computer Simulation for Science and Engineering (COSSE, TDTNM) as well as the Nordic N5TeAM Master's Programme in Applied and Engineering Mathematics (TITMM). The course is also suitable for students in the Computer Science and Engineering programme (CDATE) who have taken the course SF1626 Calculus in Several Variables (flervariabelanalys).
This course is a subset of the PhD-level course SF3580 - Numerical linear algebra.
Lecture plan:
The course consists of five blocks:
- Block 1: Large sparse eigenvalue problems
  - Topics: power method, Rayleigh quotient iteration, Arnoldi's method, repeated Gram-Schmidt, modified Gram-Schmidt, convergence characterization of Arnoldi's method for eigenvalue problems (see the Arnoldi sketch after this list)
  - block1.pdf
- Block 2: Large sparse linear systems
  - Topics: GMRES, CG, CGN, BiCG, convergence of Krylov methods, preconditioning (see the Krylov solver sketch after this list)
  - block2.pdf
- Block 3: Dense eigenvalue algorithms (QR-method)
  - Topics: QR-method, Hessenberg reduction, implicit QR-method (see the QR-method sketch after this list)
  - block3.pdf
- Block 4: Functions of matrices
  - Topics: Taylor definition, Jordan definition, contour integral definition, Schur-Parlett method, scaling-and-squaring for the matrix exponential, methods for the matrix square root, methods for the sign function (see the Taylor series sketch after this list)
  - block4.pdf
- Block 5 (only for PhD students taking SF3580): Matrix equations
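A taste of Block 1: a bare-bones Arnoldi iteration with modified Gram-Schmidt orthogonalization. The test matrix, its size and the number of steps are arbitrary choices for illustration; the eigenvalues of the small Hessenberg matrix H (the Ritz values) approximate some eigenvalues of A:

```matlab
% Minimal Arnoldi iteration with modified Gram-Schmidt (illustration only).
n = 500; m = 30;                     % problem size and Krylov dimension (made up)
A = sprandn(n,n,5/n) + 2*speye(n);   % an arbitrary sparse test matrix
Q = zeros(n,m+1); H = zeros(m+1,m);
Q(:,1) = randn(n,1); Q(:,1) = Q(:,1)/norm(Q(:,1));
for k = 1:m
    w = A*Q(:,k);                    % expand the Krylov subspace
    for j = 1:k                      % modified Gram-Schmidt orthogonalization
        H(j,k) = Q(:,j)'*w;
        w = w - H(j,k)*Q(:,j);
    end
    H(k+1,k) = norm(w);
    Q(:,k+1) = w/H(k+1,k);
end
ritz = eig(H(1:m,1:m));              % Ritz values approximate eigenvalues of A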
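A taste of Block 2: MATLAB's built-in Krylov solvers gmres and pcg on a standard sparse test problem, with and without an incomplete-Cholesky preconditioner. Problem size and tolerances are arbitrary illustration choices:

```matlab
% Krylov solvers on a 2D Poisson test matrix (sizes/tolerances made up).
A = gallery('poisson',50);           % sparse SPD matrix, n = 2500
b = ones(size(A,1),1);
x1 = pcg(A,b,1e-8,200);              % conjugate gradients (A is SPD)
x2 = gmres(A,b,[],1e-8,200);         % GMRES without restarting
L = ichol(A);                        % incomplete Cholesky preconditioner
x3 = pcg(A,b,1e-8,200,L,L');         % preconditioned CG: far fewer iterations
```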
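A taste of Block 3: the basic (unshifted) QR-method. Each step factorizes H = QR and forms RQ = Q'HQ, a similarity transformation, so the eigenvalues are preserved while the matrix converges toward triangular form. Matrix size and iteration count are arbitrary illustration choices:

```matlab
% Basic (unshifted) QR-method on a small random matrix (illustration only).
A = randn(6); A = A + A';            % symmetric test matrix: real eigenvalues
H = hess(A);                         % phase 1: reduction to Hessenberg form
for k = 1:300
    [Q,R] = qr(H);                   % phase 2: one unshifted QR step ...
    H = R*Q;                         % ... equivalent to the similarity H -> Q'*H*Q
end
disp(sort(diag(H))')                 % approaches the eigenvalues of A
disp(sort(eig(A))')                  % reference: MATLAB's eig
```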
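A taste of Block 4: the Taylor definition of the matrix exponential, exp(A) = I + A + A^2/2! + ..., truncated and compared against MATLAB's expm (which uses scaling-and-squaring). The matrix and truncation order are arbitrary illustration choices:

```matlab
% Matrix exponential: truncated Taylor series vs expm (illustration only).
A = randn(5)/4;                      % modest norm: the series converges quickly
E = eye(5); T = eye(5);
for k = 1:20
    T = T*A/k;                       % next Taylor term A^k/k!, built recursively
    E = E + T;
end
disp(norm(E - expm(A)))              % small difference: both approximate exp(A)
```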
Lecture slides can be found here.
- Lectures marked as Digital will be given via Zoom.
- Lectures marked as Digital++ will be given over Zoom but also have a booked lecture room. We recommend that you listen to the lecture in that room (bring your computer and a headset). This is done to give you the possibility to meet other students in the course. We also plan to have one teacher present.
- Lectures marked as IRL (in real life) will be given in lecture rooms only.
Zoom link for digital lectures: see the Welcome message.
Week 44:
- Digital++ Lecture 1: Intro + Rayleigh quotient (RQ) + power method (PM). block1.pdf: §1.1 (power method demo: video)
- Digital++ Lecture 2: Convergence theory for PM + inverse iteration + RQI + intro to Arnoldi's method (AM). block1.pdf: §1.1, §1.3 (matlab demo: RR)
- Lecture 2.5 (video, 6 min): Rayleigh-Ritz (RR) and Bauer-Fike (BF)
- Digital++ Lecture 3: CGS, MGS, RGS, AG + AM. block1.pdf: §1.2, §1.3

Week 45:
- Lecture 3.5 (video, 5 min): Preparation for convergence of AM
- IRL Lecture 4: Convergence of AM. block1.pdf: §1.4. NOTE! NOT DIGITAL
- Lecture 4.5 (video, 7 min): Breakdown
- Digital++ Lecture 5: Linear systems intro + GMRES. block2.pdf: §2.1.1
- Digital Lecture 6: GMRES convergence, disks. block2.pdf: §2.1.2

Week 46:
- Digital Lecture 7: CG + CG convergence. block2.pdf: §2.2
- Digital++ Lecture 8: CGNE + BiCG. block2.pdf: §2.3
- Digital++ Lecture 9: Preconditioned GMRES, intro to the QR-method. block3.pdf: §3.1-3.2
- Lecture 9.5 (video, 5 min): Preconditioned CG, part 1 (also in Quiz HW2)
- Lecture 9.8 (video, 6 min): Preconditioned CG, part 2 (also in Quiz HW2)

Week 47:
- Digital Lecture 10: QR-method, improvement 1: phase 1. block3.pdf: §3.2.1
- Digital++ Lecture 11: Exercise / problem solving / homework discussion

Week 48:
- IRL Lecture 12: QR-method, improvement 2: phase 2. block3.pdf: §3.2.2
- IRL Lecture 13: QR-method shifts + convergence. block3.pdf: §3.3

Week 49:
- IRL Lecture 14: Matrix functions intro. block4.pdf: §4.1
- IRL Lecture 15: Matrix exponential + matrix square root + matrix sign function. block4.pdf: §4.2

Week 50:
- IRL Lecture 16: Krylov methods for matrix-exponential integrators. block4.pdf: §4.3
See modules for more information.