Welcome
In this course we will learn some of the most common numerical techniques and algorithms used to efficiently solve problems expressed with large matrices. We focus on a detailed understanding of the performance of these methods when applied to large-scale systems, and study topics such as convergence, accuracy, and efficiency.
Programming languages in this course:
- SF2524: Matlab or Julia
- SF3580 (PhD students): Julia
This course is offered in study period 2, starting October 2019. Before the course starts, you are welcome to try Quiz: Background. If you can't access it, see the FAQ. Don't worry if you don't do well on the background quiz; you can take it as many times as you like.
Why matrix computations?
The most common computational approaches to scientific problems involve, in some way, a subproblem containing a matrix. The efficiency and robustness of the computational method used for this matrix problem is often crucial for the efficiency and robustness of the entire approach. After an accurate discretization, a partial differential equation is approximated by a problem involving a large matrix. Google's PageRank algorithm involves computations with a huge transition matrix representing links between web pages. In order to achieve good efficiency and robustness for these approaches, you need a detailed understanding of matrix computations.
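As a small illustration (not part of the course material), the PageRank idea can be sketched with the power method on a tiny made-up link matrix. The example below is in Python with NumPy purely for illustration; the course itself uses Matlab or Julia, and the 4-page network and damping factor 0.85 are example assumptions.

```python
import numpy as np

# A tiny column-stochastic transition matrix for 4 web pages:
# entry A[i, j] is the probability of jumping from page j to page i.
A = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

d = 0.85                                   # damping factor
n = A.shape[0]
G = d * A + (1 - d) / n * np.ones((n, n))  # "Google matrix", still column-stochastic

x = np.ones(n) / n            # start from the uniform distribution
for _ in range(100):          # power iteration: x_{k+1} = G x_k
    x = G @ x
    x /= x.sum()              # keep x a probability vector

print(x)  # converged vector: the PageRank scores of the 4 pages
```

The converged vector is the dominant eigenvector of G; the power method and its convergence analysis are exactly the starting point of Block 1 below.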
Matrix computations are important in machine learning and AI: http://www.fast.ai/2017/07/17/num-lin-alg/
In other words: accept the reality that the matrix is everywhere. Take the red pill and listen to a YouTube seminar by David Gleich, or the enthusiastic TED talk about the beauty of matrices by Margot Gerritsen (Stanford).
What do I need to do?
The examination of the course consists of correctly completing:
- Homework 1
- Homework 2
- Homework 3
- Exam
Bonus points
We have two types of bonus points in this course:
- Regular bonus points, which give you extra points on the exam.
- Wiki bonus points, which help you reach the higher grades: if you accumulate y wiki-points, the score intervals for grades A and B are reduced by y points.
Each homework can give you 2 regular bonus points on the exam if you
- Hand in correct solutions to the homework before the homework deadline, and
- Pose and solve one wiki problem before the wiki deadline
Maximum obtainable: 6 regular bonus points.
Each homework can give you 1 wiki-bonus if you
- Pose and answer enough questions on the wiki before the wiki deadline. See hw1.pdf for details.
Maximum obtainable: 3 wiki-bonus points.
How do I learn all this?
The learning activities of the course consist of lectures, homework, Canvas quizzes, and wiki work. The course material can be found under Modules.
Prerequisites
- A basic course in Numerical Methods, for instance SF1544 (or equivalent).
- Recommended but not strictly necessary: a continuation course in numerical analysis, for instance SF2520 Applied Numerical Methods, which can be taken in parallel.
This course is elective for all master's programmes at KTH. It is conditionally elective in the second year of the Master's programme in Applied and Computational Mathematics, and mandatory for the Computational Mathematics (beräkningsmatematik) track. It is also conditionally elective for the Master's programme in Computer Simulation for Science and Engineering (COSSE, TDTNM) as well as the Nordic N5TeAM Master's programme in Applied and Engineering Mathematics (TITMM). The course is suitable for students in the Computer Science and Engineering programme (CDATE) who have taken SF1626 Calculus in Several Variables (flervariabelanalys).
This course is a subset of the PhD-level course SF3580 - Numerical linear algebra.
Further course information:
The course consists of the following blocks:
- Block 1: Large sparse eigenvalue problems
- Topics: Power method, Rayleigh quotient iteration, Arnoldi's method, Repeated Gram-Schmidt, Modified Gram-Schmidt, convergence characterization for Arnoldi's method for eigenvalue problems
- Block 2: Large sparse linear systems
- Topics: GMRES, CG, CGN, BiCG, convergence of Krylov methods, preconditioning
- Block 3: Dense eigenvalue algorithms (QR-method)
- Topics: QR-method, Hessenberg reduction, implicit QR-method
- Block 4: Functions of matrices
- Topics: Taylor definition, Jordan definition, Contour integral definition, Schur-Parlett method, scaling-and-squaring for the matrix exponential, methods for the matrix square root, methods for the sign function
- Block 5: (only for PhD students taking SF3580) Matrix equations
See modules for more information.
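To give a taste of Block 2, here is a minimal sketch of the conjugate gradient (CG) method for a large sparse symmetric positive definite system. It is written in Python with NumPy for illustration only (the course itself uses Matlab or Julia), and the tridiagonal 1D Laplacian test problem is an assumption chosen for the example, not part of the course material.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=1000):
    """Minimal conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # stop when the residual is small
            break
        p = r + (rs_new / rs) * p  # new A-conjugate search direction
        rs = rs_new
    return x

# SPD test problem: the 1D discrete Laplacian, tridiag(-1, 2, -1)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm of the computed solution
```

Note that CG only touches A through matrix-vector products, which is what makes Krylov methods like those in Block 2 attractive for large sparse matrices.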