
Monday 25 September 2023

3. Deep Learning Basics: Linear Algebra (Types of Matrices)

Types of Matrices:

1. Identity

2. Inverse

3. Diagonal

4. Symmetric

5. Triangular

6. Orthogonal

  

Identity Matrix

An identity matrix is a square matrix of dimensions (n, n) having 1s along its main diagonal and 0s everywhere else. It is usually denoted Iₙ.

Example: The matrix shown below is the (3, 3) identity matrix I₃:

I₃ = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 1]]

Python program:

import numpy as np

# Creating an identity matrix of size 3 using the np.identity() function
identity_matrix_1 = np.identity(3)
print("Identity matrix 1:\n", identity_matrix_1)

Linear algebra offers a powerful tool called matrix inversion.

To describe matrix inversion, we first need to define the concept of an identity matrix. An identity matrix is a matrix that does not change any vector when we multiply that vector by that matrix. We denote the identity matrix that preserves n-dimensional vectors as Iₙ.

The structure of the identity matrix is simple: all of the entries along the main diagonal are 1, while all of the other entries are zero.

The matrix inverse of A is denoted A⁻¹, and it is defined as the matrix such that A⁻¹A = Iₙ.

This allows us to solve a system of linear equations Ax = b: multiplying both sides by A⁻¹ gives A⁻¹Ax = A⁻¹b, so x = A⁻¹b.
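As a minimal sketch of this in NumPy, using the standard np.linalg.inv() and np.linalg.solve() functions (the matrix A and vector b below are just an arbitrary invertible example):

import numpy as np

# An arbitrary invertible matrix and a right-hand side vector
A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])

# Computing the inverse and verifying that A_inv @ A gives the identity
A_inv = np.linalg.inv(A)
print("A_inv @ A:\n", A_inv @ A)

# Solving Ax = b as x = A_inv @ b
print("x via inverse:", A_inv @ b)

# In practice, np.linalg.solve(A, b) is preferred: it solves the system
# directly without forming the inverse explicitly
print("x via solve:", np.linalg.solve(A, b))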

Some special kinds of matrices and vectors are particularly useful.

Diagonal Matrix

A square matrix D is called a diagonal matrix if all of its entries outside the main (principal) diagonal are zero. The main or principal diagonal consists of the elements on the diagonal running from the top left to the bottom right.



Example: The matrix A shown below is a (3, 3) diagonal matrix:

A = [[1, 0, 0],
     [0, 2, 0],
     [0, 0, 3]]

We have already seen one example of a diagonal matrix: the identity matrix, where all of the diagonal entries are 1. We write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v.

Diagonal matrices are of interest in part because multiplying by a diagonal matrix is very computationally efficient. To compute diag(v)x, we only need to scale each element xᵢ by vᵢ; in other words, diag(v)x = v ⊙ x, the element-wise product.

Inverting a square diagonal matrix is also efficient. The inverse exists only if every diagonal entry is nonzero, and in that case diag(v)⁻¹ = diag([1/v₁, ..., 1/vₙ]ᵀ).

Example program in Python:

import numpy as np

# Creating a diagonal matrix with diagonal elements (1, 2, 3)
diagonal_matrix = np.diag((1, 2, 3))
print(diagonal_matrix)

# Creating a diagonal matrix from a range of values: np.arange(1, 6, 2) gives [1, 3, 5]
matrix_range = np.diag(np.arange(1, 6, 2))
print(matrix_range)
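As a short sketch of the two efficiency claims above (all functions used here are standard NumPy; the vectors v and x are arbitrary examples), multiplying diag(v) by x matches the element-wise product v * x, and the inverse of diag(v) is diag(1/v) when every entry of v is nonzero:

import numpy as np

v = np.array([1., 2., 3.])
x = np.array([4., 5., 6.])
D = np.diag(v)

# diag(v) x equals the element-wise product v * x
print(D @ x)    # [ 4. 10. 18.]
print(v * x)    # [ 4. 10. 18.]

# diag(v)^-1 = diag(1/v), valid because every v_i is nonzero
print(np.diag(1.0 / v))
print(np.linalg.inv(D))    # same result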

Symmetric Matrix

A symmetric matrix is any matrix that is equal to its own transpose: A = Aᵀ.

  

Python program:

import numpy as np

# Creating matrix A
A = np.array([[2, 3, 1], [3, 4, -1], [1, -1, 1]])
print("A:\n", A)

# Finding the transpose of the matrix
transposed_matrix = A.transpose()
print("Transpose of A:\n", transposed_matrix)

# Comparing A with its transpose element-wise
comparison = (A == transposed_matrix)

# Checking whether all the elements in the comparison are True
equal_arrays = comparison.all()
print(equal_arrays)
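As a side note, NumPy also provides np.array_equal(), which performs the same element-wise comparison and all() reduction in a single call:

print(np.array_equal(A, A.transpose()))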

Triangular Matrix

A triangular matrix is either a lower triangular or an upper triangular matrix.

A lower triangular matrix is a square matrix in which all the elements above the main diagonal are zero; an upper triangular matrix is a square matrix in which all the elements below the main diagonal are zero.

Example: L is a lower triangular matrix of dimension (3, 3):

L = [[1, 0, 0],
     [4, 5, 0],
     [7, 8, 9]]
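A minimal sketch in NumPy: the standard np.tril() and np.triu() functions extract the lower and upper triangular parts of a matrix (the matrix M below is just an arbitrary example):

import numpy as np

# An arbitrary (3, 3) matrix
M = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# np.tril() zeroes out everything above the main diagonal
print("Lower triangular:\n", np.tril(M))

# np.triu() zeroes out everything below the main diagonal
print("Upper triangular:\n", np.triu(M))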


More on Vectors:

The inner product of two vectors of the same dimension can be computed with NumPy's inner() function:

import numpy as np

vector_1 = np.array([1, 2, 3])
vector_2 = np.array([1, 0, 3])
print("Vector 1:\n", vector_1)
print("Vector 2:\n", vector_2)

# Finding the inner product of two vectors of the same dimension using np.inner()
inner_product_1 = np.inner(vector_1, vector_2)
print("Inner Product of Vector 1, Vector 2:\n", inner_product_1)

Orthogonal vectors

If the inner product of two non-zero vectors v₁ and v₂ is zero, that is,

v₁ᵀv₂ = 0,

then the vectors v₁ and v₂ are called orthogonal vectors.

Example: Let v₁ and v₂ be two vectors as follows:

v₁ = [3, -1, 2]ᵀ,  v₂ = [2, 4, -1]ᵀ

Their inner product is 3·2 + (-1)·4 + 2·(-1) = 6 - 4 - 2 = 0, so v₁ and v₂ are orthogonal, as the program below verifies.
import numpy as np

# Creating column vectors
Vector_1 = np.array([[3], [-1], [2]])
Vector_2 = np.array([[2], [4], [-1]])
print("Vector 1\n", Vector_1)
print("Vector 2\n", Vector_2)

# Finding the transpose of Vector_1
trans = np.transpose(Vector_1)

# Finding the dot product; a result of 0 confirms orthogonality
result = np.dot(trans, Vector_2)
print("Dot Product\n", result)

4. Norm

Lp norm

||x||p = ( ∑i |xi|^p )^(1/p),  for p ∈ ℝ, p ≥ 1

Norms, including the Lp norm, are functions mapping vectors to non-negative values. On an intuitive level, the norm of a vector x measures the distance from the origin to the point x.

More rigorously, a norm is any function f that satisfies the following properties:

• f(x) = 0 ⟹ x = 0

• f(x + y) ≤ f(x) + f(y) (the triangle inequality)

• ∀ α ∈ ℝ, f(αx) = |α| f(x) (homogeneity)

 

The L2 norm, with p = 2, is known as the Euclidean norm. It is simply the Euclidean distance from the origin to the point identified by x. The L2 norm is used so frequently in machine learning that it is often denoted simply as ||x||, with the subscript 2 omitted. It is also common to measure the size of a vector using the squared L2 norm, which can be calculated simply as xᵀx.

 

With p = 1, the L1 norm simplifies to ||x||1 = ∑i |xi|.
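As a sketch, all of these norms can be computed with NumPy's standard np.linalg.norm() function by passing the order p through its ord argument (ord=2 is the default; the vector x below is an arbitrary example):

import numpy as np

x = np.array([3., -4.])

# L2 (Euclidean) norm: sqrt(3^2 + (-4)^2) = 5
print(np.linalg.norm(x))             # ord=2 is the default

# Squared L2 norm, computed directly as x^T x
print(x @ x)                         # 25.0

# L1 norm: |3| + |-4| = 7
print(np.linalg.norm(x, ord=1))

# A general Lp norm, e.g. p = 3: (27 + 64)^(1/3)
print(np.linalg.norm(x, ord=3))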



________

PPTs

1. Introduction

2. Linear Algebra



Material in PDF

1. Introduction & Linear Algebra.


 












