Aligning audio

Tim McKenna <mckenna.tim@gmail.com> Thu, Sep 10, 6:03 PM (19 hours ago) to Bob

I emailed them. I don't know; they may be doing what we are doing.

What if we tried a proof of concept?

  • Let's pick a recording, like maybe "Yugen Himen" from Yiddish NY.
  • You sing it and record yourself while listening to it in headphones.
  • Send me the recording of you; then I sing it and record myself while listening to you.
  • Then I/we fuck around with Audacity and see if I can sync them up.
  • Then I/we sync up the videos based on timestamps from Audacity (see the MoviePy sketch after this list).
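
A minimal sketch of those last two steps, assuming MoviePy is installed; the file names and the 0.85-second offset are made-up placeholders standing in for whatever Audacity tells us.

 # Sketch: line up two recorded videos using an offset measured in Audacity.
 # File names and the 0.85 s offset are placeholders, not real project files.
 from moviepy.editor import VideoFileClip, CompositeVideoClip
 
 first_take = VideoFileClip("first_take.mp4")
 second_take = VideoFileClip("second_take.mp4")
 
 # Audacity says the second take starts 0.85 s after the first one.
 # set_start returns a new clip (see the note in the resources below).
 second_shifted = second_take.set_start(0.85)
 
 # CompositeVideoClip draws the later clip on top of the earlier one and
 # mixes both audio tracks, so the two voices play together, aligned.
 combined = CompositeVideoClip([first_take, second_shifted])
 combined.write_videofile("duet_aligned.mp4")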

If that works, we try stage 2:

  • We get Peri or Jenny or Linda to do the same thing for the other voices.
  • Then I/we fuck around with Audacity and see if I can sync them up (an automatic alternative is sketched after this list).
  • Then I/we sync up the videos based on timestamps from Audacity.
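
If nudging tracks around in Audacity by hand gets tedious, the offset between two takes can also be estimated automatically by cross-correlating their audio. This is only a sketch of that alternative, not something we've tried: it assumes numpy and scipy are installed and that both takes were exported from Audacity as WAV files at the same sample rate (the file names are placeholders).

 # Sketch: estimate the offset between two takes by cross-correlating their audio.
 # Assumes both WAVs share one sample rate; file names are placeholders.
 import numpy as np
 from scipy.io import wavfile
 from scipy.signal import correlate
 
 rate_a, a = wavfile.read("first_take.wav")
 rate_b, b = wavfile.read("second_take.wav")
 assert rate_a == rate_b, "resample so both takes share one sample rate"
 
 # Mix down to mono floats so the correlation is 1-D and doesn't overflow.
 a = a.astype(np.float64).mean(axis=1) if a.ndim > 1 else a.astype(np.float64)
 b = b.astype(np.float64).mean(axis=1) if b.ndim > 1 else b.astype(np.float64)
 
 # The peak of the full cross-correlation tells us how far the second take
 # must be shifted to line up with the first.
 corr = correlate(a, b, mode="full")
 lag = np.argmax(corr) - (len(b) - 1)   # in samples; positive = delay the second take
 
 offset_seconds = lag / rate_a
 print(f"shift the second take by {offset_seconds:+.3f} s to line up with the first")

The printed offset could then go straight into set_start (or into Audacity's Time Shift tool) to line up the corresponding video clips.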

If we could develop some kind of process for learning new music, I think I would stay involved. Maybe:

  • Derek introduces a new song and we learn pronunciation, rhythm, melody and our parts while muted.
  • Section leaders follow up with recordings we can have in our headphones while we sing, eventually recording ourselves.
  • We listen to each other and talk about it.
  • Minis or section leaders do the same process and create recordings of all 4 voices.
  • We start to sing with the other voices, eventually recording ourselves again.

Resources

https://zulko.github.io/moviepy/getting_started/effects.html#effects

 from moviepy.editor import VideoFileClip
 
 my_clip = VideoFileClip("some_file.mp4")
 my_clip.set_start(t=5)                # does nothing: set_start returns a new clip, and the result is discarded
 my_new_clip = my_clip.set_start(t=5)  # good! keep the returned clip


Time representations in MoviePy

Many methods that we will see accept times as arguments. For instance, clip.subclip(t_start, t_end) cuts the clip between two times. For these methods, times can be represented either in seconds (t_start=230.54), as a couple (minutes, seconds) (t_start=(3,50.54)), as a triplet (hours, minutes, seconds) (t_start=(0,3,50.54)), or as a string (t_start='00:03:50.54').
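
For example, assuming a placeholder file name, all four of these calls cut the same excerpt; only the way the times are written differs.

 # All four calls cut the same excerpt (3 min 50.54 s to 4 min 20.54 s);
 # only the time notation differs. "rehearsal.mp4" is a placeholder file name.
 from moviepy.editor import VideoFileClip
 
 clip = VideoFileClip("rehearsal.mp4")
 
 excerpt = clip.subclip(230.54, 260.54)                  # seconds
 excerpt = clip.subclip((3, 50.54), (4, 20.54))          # (minutes, seconds)
 excerpt = clip.subclip((0, 3, 50.54), (0, 4, 20.54))    # (hours, minutes, seconds)
 excerpt = clip.subclip('00:03:50.54', '00:04:20.54')    # 'HH:MM:SS.xx' string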