The Grid Corpus is a large multitalker audiovisual sentence corpus designed to support joint computational-behavioral studies in speech perception. In brief, the corpus consists of high-quality audio and video (facial) recordings of 1000 sentences spoken by each of 34 talkers (18 male, 16 female), for a total of 34000 sentences. Sentences are of the form "put red at G9 now". audio_25k.zip contains the utterances in wav format at a 25 kHz sampling rate, in a separate directory per talker. alignments.zip provides word-level time alignments, again separated by talker. s1.zip, s2.zip, etc. contain .jpg videos for each talker (note that, due to an oversight, no video for talker t21 is available). The Grid Corpus is described in detail in the paper jasagrid.pdf included in the dataset.
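The per-talker word alignments can be consumed with a few lines of code. The sketch below is a minimal, hypothetical example: it assumes each alignment line is a whitespace-separated "start end word" triple, and the time stamps shown are made up (jasagrid.pdf defines the actual file format and time units).

```python
def parse_alignment(text):
    """Parse GRID-style alignment text into (start, end, word) tuples.

    Assumed format (an assumption, not the documented one): one
    whitespace-separated "start end word" triple per line, with
    integer time stamps.
    """
    entries = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        start, end, word = parts
        entries.append((int(start), int(end), word))
    return entries


# Hypothetical alignment for a sentence like "put red at g9 now";
# the numbers are invented for illustration only.
sample = """\
0 4000 sil
4000 9000 put
9000 13000 red
13000 15000 at
15000 21000 g9
21000 26000 now
26000 30000 sil"""

# Drop the silence markers and recover the spoken sentence.
words = [w for _, _, w in parse_alignment(sample) if w != "sil"]
print(" ".join(words))  # put red at g9 now
```

Given alignments in this shape, the per-word intervals can be used to excise single words from the 25 kHz wav files for word-level experiments.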
IPython notebooks with demo code intended as a companion to the book "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Steven L. Brunton and J. Nathan Kutz (GitHub: dynamicslab/databook_python).
Deep Learning Fundamentals -- Code material and exercises (GitHub: Lightning-AI/dl-fundamentals).
D. Galvin (2014). arXiv:1406.7872. Comment: Notes prepared to accompany a series of tutorial lectures given by the author at the 1st Lake Michigan Workshop on Combinatorics and Graph Theory, Western Michigan University, March 15--16, 2014.