Linear Recurrence Relations in Music
By Chris Hall

The Aim of My Project
The goal was to take a composition by Beethoven, the Sonata No. 1 in D Major, Op. 12, and generate a linear recurrence relation that best represents the tonality of the music.

Linear Recurrence Relations
x_n = a_1 x_{n-1} + a_2 x_{n-2} + … + a_k x_{n-k}
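As a minimal illustration (not from the original slides; the coefficients and seed notes are invented), a degree-k relation generates the rest of a sequence from its first k values:

    % Generate a sequence from x_n = a(1)*x_{n-1} + ... + a(k)*x_{n-k}.
    % Coefficients and seed below are illustrative only.
    a    = [1 1 -1];        % coefficients a_1 ... a_k (degree k = 3)
    seed = [60 62 64];      % first k notes, as MIDI values
    N    = 10;              % total number of notes to produce
    x = zeros(1, N);
    x(1:numel(seed)) = seed;
    for n = numel(seed)+1 : N
        % previous k notes, most recent first, dotted with the coefficients
        x(n) = a * x(n-1:-1:n-numel(a)).';
    end
    disp(x)                 % 60 62 64 66 68 70 72 74 76 78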

Premises
- Only the melodic line was used, no chords; this allows counting only a single note at a time.
- The range of notes was restricted to the audible range as determined by MIDI: Z_128.
- The value of a single note was determined by the MIDI format, with middle C being 60.
- All notes were valued sequentially by pitch, relative to the notes immediately above and below (e.g., if E is 20, then D# is 19 and F is 21).

What Was Investigated
- The degree of the recurrence relation
- The range for the coefficients
- The best algorithm for finding the relation
- The best-fit solution

Establishing the Quality of Results
The standard deviation of the original set of notes was taken as the maximum allowable error for the notes generated by the relation. The errors used for comparison were the square errors of the generated notes relative to the original notes. The average square error over all generated notes was then compared to the standard deviation of the original note set; only results whose average square error fell below the standard deviation were considered acceptable. The highest-quality results were the acceptable results farthest below the standard deviation.
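A minimal sketch of this acceptance test, assuming orig and gen are equal-length vectors of MIDI note values (the variable names and data are mine, not the project's):

    orig = [60 62 64 65 67 69 71 72];  % original notes (illustrative)
    gen  = [60 62 64 66 67 70 71 73];  % notes generated by a candidate relation
    sq_err  = (gen - orig).^2;         % square error of each generated note
    avg_err = mean(sq_err);            % average square error
    sigma   = std(orig);               % standard deviation of the original notes
    acceptable = avg_err < sigma;      % the acceptance criterion from the slides
    quality    = sigma - avg_err;      % larger is better among acceptable fits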

The Method
The first avenue taken was to consider the same number of equations as unknowns, where the unknowns were the coefficients of the recurrence relation. As with all the methods that followed, MATLAB was used to generate the notes, errors, and comparisons. The MATLAB program let the user enter the desired degree of the relation (which corresponds to the number of unknowns); for the sake of comparison, more than one degree could be entered at a time. The program then produced a graph of average square errors against the number of equations used, the theory behind which is discussed a bit later.
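A minimal sketch of this square-system setup for a degree-d relation (my reconstruction, not the original program): each row of the matrix holds d consecutive notes, and the right-hand side holds the note that follows them.

    notes = [60 62 64 65 67 69 71 72 74 76];  % melody as MIDI values (illustrative)
    d = 3;                                % desired degree = number of unknowns
    A = zeros(d, d);
    b = zeros(d, 1);
    for i = 1:d
        A(i, :) = notes(i+d-1 : -1 : i);  % d previous notes, most recent first
        b(i)    = notes(i+d);             % the note they should predict
    end
    c = A \ b                             % solve the square system for the coefficients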

The First Program

Using More Equations
In order to vary the number of equations used, the least squares method was applied. Because each matrix had full rank, a unique solution could be obtained. Moreover, since the function being minimized was the sum of the square errors, the Hessian matrix of that function was always a multiple of the product of the matrix's transpose with the matrix itself.
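A minimal sketch of the overdetermined case under the same setup as above (my reconstruction), now with t > d training notes:

    notes = [60 62 64 65 67 69 71 72 74 76];  % illustrative melody
    t = 7;  d = 3;                  % t equations in d unknowns, t > d
    A = zeros(t, d);
    b = zeros(t, 1);
    for i = 1:t
        A(i, :) = notes(i+d-1 : -1 : i);
        b(i)    = notes(i+d);
    end
    c = A \ b;                      % backslash returns the least-squares solution
    % Equivalently, via the normal equations (the Hessian is 2*(A'*A)):
    % c = (A' * A) \ (A' * b);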

A Simple 4x2 Example
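The worked example on this slide was an image and did not survive in the transcript. As a hypothetical stand-in (all numbers invented), here is a four-equation, two-unknown system solved by least squares in MATLAB:

    A = [2 1; 1 3; 4 1; 1 1];       % four equations, two unknowns (invented data)
    b = [3; 5; 6; 2];
    c = A \ b;                      % least-squares solution
    sse = norm(A*c - b)^2           % sum of square errors at the optimum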

Other Definitive Information
I know that the point where the gradient is zero is a local minimum, because the Hessian matrix of my original function is positive definite.
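In symbols (my own summary of the standard argument, using the least-squares notation above): the objective is f(c) = ||Ac - b||^2, with gradient ∇f(c) = 2A^T(Ac - b) and Hessian ∇^2 f(c) = 2A^T A. When A has full column rank, v^T (2A^T A) v = 2||Av||^2 > 0 for every v ≠ 0, so the Hessian is positive definite and the point where the gradient vanishes, c* = (A^T A)^{-1} A^T b, is the unique minimum.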

The Discrete Case
The next path for investigation was to analyze coefficients with discrete rather than continuous ranges. It should be noted that these ranges included, but were not limited to, finite fields. Since the notes themselves were elements of a finite range (remember, Z_128), the ranges investigated for the coefficient vectors were covered by the range of the notes.

The Discrete Case, cont'd
Another MATLAB program was written that allowed the user to input a desired range of coefficients and a desired number of equations. For reasons to be discussed shortly, only degrees two and three were used. In the degree-two case, the square errors were calculated for all possible combinations of coefficients in the range and stored in a Range(C) x Range(C) square matrix for comparison. The output is the coefficient vector with the least square error, along with a graph that compares the original note set with the generated notes over the total number of notes used. The degree-three case differed only in the storage of the square errors: three-dimensional storage was required.
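A minimal sketch of the degree-two exhaustive search (my reconstruction; the assumption that "range R" means the values 0 through R-1 is mine):

    notes = [60 62 64 65 67 69 71 72 74 76];  % illustrative melody
    R = 3;  t = 8;  d = 2;           % coefficient range, equations, degree
    vals = 0:R-1;                    % assumed meaning of a range of R
    E = zeros(R, R);                 % Range(C) x Range(C) grid of average errors
    for i = 1:R
        for j = 1:R
            err = 0;
            for n = d+1 : d+t
                pred = vals(i)*notes(n-1) + vals(j)*notes(n-2);
                err  = err + (notes(n) - pred)^2;
            end
            E(i, j) = err / t;       % average square error for this pair
        end
    end
    [bestErr, idx] = min(E(:));      % smallest average square error on the grid
    [bi, bj] = ind2sub(size(E), idx);
    bestCoeffs = [vals(bi) vals(bj)] % coefficient vector with the least error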

The Discrete Program

Output for Range 3 with 10 Equations

Testing with an Actual Recurrence Relation
An actual recurrence relation was written to test the program. When the chosen range covered the range of the original coefficients, the program reproduced the sequence exactly; otherwise, it produced a best approximation.
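A minimal way to reproduce such a test (my sketch; [1,4,6] is the coefficient vector from the slides that follow, while the seed values and coefficient ordering are my assumptions):

    a    = [1 4 6];                 % the known coefficient vector under test
    seed = [1 2 3];                 % arbitrary seed values (assumption)
    N    = 10;
    x = zeros(1, N);
    x(1:3) = seed;
    for n = 4:N
        x(n) = a * x(n-1:-1:n-3).'; % x_n = 1*x_{n-1} + 4*x_{n-2} + 6*x_{n-3}
    end
    % Running the degree-three discrete search on x with a range covering 6
    % should recover [1 4 6] exactly; a smaller range (e.g. 5) returns only
    % the best approximation available within that range.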

Test with [1,4,6] as the Coefficient Vector and a Range of 5 with 7 Equations

Test with [1,4,6] as the Coefficient Vector and a Range of 7 with 7 Equations

Future Work
- For more discrete analysis, write a MATLAB program that allows for continual expansion of the degree.
- Apply the same methods to other, perhaps less complex, pieces of music (portions of Pachelbel's Canon in D Major, for instance).
- For musical analysis, write a MATLAB program that exports the generated notes, replacing the originals in the original MIDI file. We might not be producing Beethoven, but it might still be musically interesting.
