Using Two Web Toolkits to Build Multimodal Piano…

Artistic black and white photo capturing a pianist's hands on piano keys, highlighting music's emotion.

Using Two Web Toolkits to Build Multimodal Piano Performance Datasets with Fingering Annotations: A Practical Guide

This guide details building a multimodal piano performance dataset with fingering annotations using two web toolkits. We’ll cover data acquisition, annotation, alignment, validation, and export, providing a reproducible workflow.

What You Will Build

A practical multimodal piano dataset incorporating audio, MIDI, score alignment, fingering annotations (per-note level), and optionally, video for hand positions. We’ll use Chopin and Beethoven datasets, plus orchestral symphonies, to compare two approaches: LBM and NBM.

Step 1: Data Licensing and Ingest Preparation

Datasets

  • Chopin piano performances (two professional datasets)
  • Beethoven piano sonatas
  • Orchestral symphonies dataset (classical and romantic periods)

Licensing

Chopin and Beethoven works are public domain. Verify licenses for orchestral performances (recordings, editions, or arrangements may have separate rights). Store license information in dataset_metadata/license.txt.

Data Formats

  • Scores: MusicXML or MIDI
  • Audio: WAV or FLAC at 44.1 kHz, 24-bit
  • Metadata: JSON per piece (composer, piece_title, tempo, time_signature)

Project Scaffolding

create a dedicated repository, schema.json (defining fields and data types), initialize DVC, and set up a data/ folder (raw/, processed/, metadata/).

Schema and Folder Sketch

dataset_metadata/schema.json
– defines fields for each piece (composer, piece_title, dataset_origin, tempo, time_signature, format, file_path, file_name, sample_rate, bit_depth, license, source_dataset).
dataset_metadata/license.txt
– consolidated licensing notes and links for all datasets.
data/
– data folder with raw and processed content; recommended subfolders: raw/ (original files), processed/ (ingested, standardized files)

This foundation ensures a transparent, auditable process from data acquisition to ingestion.

Step 2: Set Up Two Web Toolkits (Toolkit A and Toolkit B)

Two web toolkits are central: Toolkit A (fingering and note-level metadata) and Toolkit B (media capture and alignment). They share a schema (mapping.json) for data synchronization.

Toolkit A: Annotation Workspace

Provides a workspace for per-note details (batch import/export, finger assignments, timing information). Fields include: note_id, onset, offset, pitch, and fingering.

Toolkit B: Capture and Alignment Toolkit

Handles media capture, synchronization, audio/video ingestion, MIDI integration, and automatic alignment to the central dataset. API endpoints include /capture (POST) and /annotations (POST).

Schema Mapping (mapping.json)

{  "mapping": [    { "source": "toolkitA.note_id", "target": "central.note_id" },    { "source": "toolkitA.fingering", "target": "central.fingering" },    { "source": "toolkitA.onset", "target": "central.onset" },    { "source": "toolkitA.offset", "target": "central.offset" },    { "source": "toolkitA.pitch", "target": "central.pitch" }  ]}

Install toolkits (pip install toolkit-a toolkit-b), configure API tokens and URLs in config.yaml, and prioritize secure token storage.

Step 3: Acquire Recordings and Fingering Annotations

Gather diverse, high-quality performances and precise fingering data (aim for >95% onset event coverage). Use Toolkit A for initial fingerings, Toolkit B for cross-checking. Require at least two independent annotators per recording, resolving disagreements by consensus. Annotate a 5% gold-standard subset for reliability monitoring.

Step 4: Multimodal Alignment and Fingering Labeling

Align audio to MIDI/score using Dynamic Time Warping (DTW) with a 200ms maximum window. Attach fingering data to aligned notes. Target metrics: Alignment mean error ≤ 25ms, Fingering accuracy ≥ 0.92.

def merge_annotations(central, a, b):    """    central, a, b:      dict mapping note_index (int) -> annotation dict      annotation dict may include keys like 'time', 'velocity', 'finger'    Returns a new dict with merged annotations (central first, then A, then B).    """    merged = {}    all_indices = sorted(set(central) | set(a) | set(b))    for idx in all_indices:        ann = {}        if idx in central:            ann.update(central[idx])        if idx in a:            ann.update(a[idx])        if idx in b:            ann.update(b[idx])        merged[idx] = ann    return merged

Step 5: Data Validation and Quality Assurance

Implement automated checks (missing fields, invalid times, pitches), inter-annotator reliability (Cohen’s kappa > 0.8), and data integrity tests (synthetic error injection).

Step 6: Data Packaging and Export

Export data as JSONL per performance (data/jsonl/{piece_id}_{movement_id}_{performance_id}.jsonl). Create a manifest (dataset_manifest.json) listing performances, licenses, provenance, and checksums. Version data with DVC and publish tagged releases.

Step 7: Reproducible Workflow

Use a Dockerfile for a portable environment. Employ docker-compose for running local services (Toolkit A, Toolkit B, QA UI). Maintain a detailed README with run instructions, environment variables, and example runs. Include a reproducible Jupyter Notebook.

Comparison of LBM vs. NBM Across Datasets

Dataset LBM Fingering Accuracy LBM Alignment Error (ms) LBM Processing Time per 1k notes (min) LBM Data Size (notes) NBM Fingering Accuracy NBM Alignment Error (ms) NBM Processing Time per 1k notes (min) NBM Data Size (notes)
Chopin Piano dataset 0.92 28 3 28k 0.95 25 4.5 28k
Beethoven Piano Sonatas dataset 0.90 30 3.2 22k 0.93 27 4.2 22k
Orchestral Symphonies dataset 0.88 32 3.4 36k 0.91 29 4.6 36k

NBM shows improved fingering accuracy (~2-3 percentage points) at the cost of increased processing time (~20-60%).

Pros and Cons

Pros

  • Directly addresses fingering annotations
  • Step-by-step workflow
  • Concrete integration of two web toolkits
  • Reproducible workflow
  • Cross-dataset evaluation

Cons

  • Requires careful license management
  • Integration complexity between toolkits
  • Fingering annotation is labor-intensive
  • Potential for human error
  • Real-world variability can cause alignment drift

Watch the Official Trailer

Comments

Leave a Reply

Discover more from Everyday Answers

Subscribe now to keep reading and get access to the full archive.

Continue reading