Document Type

Presentation

Publication Date

2024

Keywords

Artificial Intelligence, Music, Audio

Disciplines

Artificial Intelligence and Robotics

Abstract

I want to build a Next.js website locally and then deploy it to Vercel. Within the website, I want to use AI to separate a song's instruments (vocals, piano, guitar, drums, bass, etc.) and to identify which notes are being played. The plan is to use an AI stem splitter to separate the audio into individual tracks and a second AI model for note detection.
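A minimal sketch of how the stem-separation step might be wired into the Next.js site, assuming an App Router API route that forwards the uploaded audio to an external separation service (for example, a server running an open-source model such as Demucs). The route path, the STEM_SERVICE_URL variable, and the response shape are illustrative assumptions, not part of the original plan:

```typescript
// app/api/separate/route.ts — sketch of a Next.js App Router route handler.
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  // Read the uploaded audio file from the multipart form body.
  const formData = await request.formData();
  const file = formData.get("audio");
  if (!file || typeof file === "string") {
    return NextResponse.json({ error: "No audio file uploaded" }, { status: 400 });
  }

  // Forward the audio to a hypothetical stem-separation service.
  // STEM_SERVICE_URL and its default are placeholders for illustration.
  const serviceUrl = process.env.STEM_SERVICE_URL ?? "http://localhost:8000/separate";
  const upstream = new FormData();
  upstream.append("audio", file, file.name);

  const res = await fetch(serviceUrl, { method: "POST", body: upstream });
  if (!res.ok) {
    return NextResponse.json({ error: "Stem separation failed" }, { status: 502 });
  }

  // Assumed response shape: a map from stem name ("vocals", "drums", ...) to an audio URL.
  const stems: Record<string, string> = await res.json();
  return NextResponse.json({ stems });
}
```

Note detection could follow the same pattern: a second route (or the same backend service) that takes a separated stem and returns detected notes from a pitch-detection model, with the frontend rendering both the stems and the note output.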
