Sequencers
Algorithmic composition tools including raga generators, cellular automata, and non-linear sequencing.
Unified Sequencer API
Common Interface
`sequencer`: All sequencers share a unified API with common base classes, sample management, and pattern-based generation.
Core Components
- Abstract base class with `generate(duration)`, `export(filepath)`, and common properties (`sample_rate`, `bpm`).
- `PatternSequencer`: pattern-based sequencing with sample triggering; supports timing modifiers (`*2` double speed, `/2` half speed).
- `LiquidSequencer`: non-linear timing with swing and jitter for organic, humanized rhythms.
- Base class for generative sequencers (raga, tree, melody) with built-in sine-wave synthesis.
- `SampleManager`: unified sample loading from files or NumPy arrays, with automatic resampling to the target rate.
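The timing modifiers mentioned above can be read as scaling the per-step duration. A minimal sketch, assuming a 16th-note grid (the grid resolution and the exact `*2`/`/2` semantics are assumptions, not the library's documented behavior):

```python
# Hypothetical sketch of step timing with modifiers; assumes a
# 16th-note grid and that "*2"/"/2" scale a step's duration.
def step_duration(bpm: float, modifier: str = "") -> float:
    """Seconds per pattern step on a 16th-note grid."""
    base = 60.0 / bpm / 4.0  # one 16th note
    if modifier == "*2":
        return base / 2.0    # double speed -> half the duration
    if modifier == "/2":
        return base * 2.0    # half speed -> double the duration
    return base

print(step_duration(120))        # 0.125 s per step at 120 BPM
print(step_duration(120, "*2"))  # 0.0625
```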
PatternSequencer Example
```python
from audio_dsp.sequencer import PatternSequencer
from audio_dsp.synth import SubtractiveSynth

# Create samples from synth output
synth = SubtractiveSynth()
kick = synth.synthesize(freq=60, duration=0.1)
snare = synth.synthesize(freq=200, duration=0.1)

# Create pattern sequencer
seq = PatternSequencer(bpm=120)

# Add samples - accepts numpy arrays OR file paths
seq.add_sample("kick", kick)          # numpy array
seq.add_sample("snare", snare)        # numpy array
seq.add_sample("hihat", "hihat.wav")  # file path

# Define patterns (1 = trigger, 0 = rest)
patterns = {
    "kick":  "1000100010001000",
    "snare": "0000100000001000",
    "hihat": "1010101010101010",
}

# Generate audio
audio = seq.generate_from_patterns(patterns, duration=4.0)

# Export to file
seq.export("beat.wav", duration=4.0)
```
LiquidSequencer Example (Non-Linear Timing)
```python
from audio_dsp.sequencer import LiquidSequencer

# Create sequencer with swing and jitter
seq = LiquidSequencer(bpm=100, swing=0.15, jitter=0.02)

# Load default drum samples
seq.sample_manager.load_directory("audio_dsp/sequencer/samples/drums")

# Add samples from manager
seq.add_sample("kick", seq.sample_manager.get("kick"))
seq.add_sample("snare", seq.sample_manager.get("snare"))

# Generate with humanized timing
patterns = {"kick": "1010", "snare": "0101"}
audio = seq.generate_from_patterns(patterns, duration=2.0)
```
SampleManager - Unified Sample Loading
```python
from audio_dsp.sequencer import SampleManager, get_default_samples_dir

# Create manager with target sample rate
sm = SampleManager(sample_rate=44100)

# Load from file
sm.load("my_sample.wav", name="sample1")

# Add numpy array (e.g., from synth)
sm.add("synth_tone", synth_output_array, sr=44100)

# Load entire directory
sm.load_directory("samples/", pattern="*.wav")

# Get sample (auto-resampled to the manager's rate)
audio = sm.get("sample1")

# List all loaded samples
print(sm.list_samples())
```
Raga Generator
Indian Raga Sequencer
`sequencer.raga_generator`: Time-of-day based raga selection with fractal rhythm generation and probabilistic note selection.
choose_raga()
Selects an appropriate raga based on the current time of day, following traditional Indian musical practice.
Returns: Dictionary with raga name, intervals, and mood information.
generate_fractal_rhythm()
| Parameter | Type | Default | Description |
|---|---|---|---|
| core_pattern | list | required | Base binary pattern [0, 1, ...] |
| depth | int | 3 | Fractal recursion depth |
| randomness | float | 0.2 | Probability of pattern flip (0-1) |
Returns: Expanded binary rhythm pattern
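For intuition, here is one plausible (hypothetical) expansion scheme consistent with the parameters above: each level of depth substitutes the core pattern for every 1 and a run of rests for every 0, flipping emitted bits with probability `randomness`. The library's actual recursion may differ.

```python
import random

def fractal_rhythm(core, depth=3, randomness=0.2, seed=0):
    """Hypothetical fractal expansion; each level multiplies the
    pattern length by len(core)."""
    rng = random.Random(seed)
    pattern = list(core)
    for _ in range(depth - 1):
        expanded = []
        for bit in pattern:
            chunk = list(core) if bit else [0] * len(core)
            expanded.extend(
                (1 - b) if rng.random() < randomness else b for b in chunk
            )
        pattern = expanded
    return pattern

r = fractal_rhythm([1, 0, 1], depth=3, randomness=0.0)
print(len(r))  # 27 = 3 * 3 * 3
```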
select_next_note()
| Parameter | Type | Description |
|---|---|---|
| current_note | int | Current scale degree (0-7) |
| intervals | list | Raga interval list |
Returns: Next note based on probability weights
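A sketch of what probability-weighted selection can look like; the weighting here (favoring small melodic steps over leaps) is an assumption, not the library's documented distribution:

```python
import random

def pick_next_degree(current, n_degrees=8, seed=None):
    """Weight nearby scale degrees more heavily than distant leaps
    (hypothetical weighting, for illustration)."""
    rng = random.Random(seed)
    candidates = list(range(n_degrees))
    weights = [1.0 / (1 + abs(c - current)) for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

nxt = pick_next_degree(3, seed=42)
print(0 <= nxt < 8)  # True
```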
generate_random_core_pattern()
| Parameter | Type | Default | Description |
|---|---|---|---|
| length | int | 6 | Pattern length |
Returns: Random binary pattern [0, 1, 0, 1, ...]
Ragas by Time of Day
| Time Period | Hours | Ragas |
|---|---|---|
| Early Morning | 4 AM - 7 AM | Bhairav, Ramkali |
| Late Morning | 7 AM - 10 AM | Bilawal, Jaunpuri |
| Afternoon | 10 AM - 2 PM | Sarang, Brindavani Sarang |
| Evening | 5 PM - 10 PM | Yaman, Kafi |
| Night | 10 PM - 4 AM | Malkauns, Bageshree |
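The table can be re-implemented as a simple hour lookup. This is a hypothetical sketch, not the actual `choose_raga()`; hours the table leaves unassigned (e.g. 2 PM to 5 PM) fall back to the night ragas here:

```python
from datetime import datetime

# Hypothetical schedule built from the table above.
RAGA_SCHEDULE = [
    (range(4, 7),   ["Bhairav", "Ramkali"]),
    (range(7, 10),  ["Bilawal", "Jaunpuri"]),
    (range(10, 14), ["Sarang", "Brindavani Sarang"]),
    (range(17, 22), ["Yaman", "Kafi"]),
]

def ragas_for_hour(hour=None):
    """Return the ragas listed for the given (or current) hour."""
    if hour is None:
        hour = datetime.now().hour
    for hours, ragas in RAGA_SCHEDULE:
        if hour in hours:
            return ragas
    return ["Malkauns", "Bageshree"]  # night fallback (10 PM - 4 AM)

print(ragas_for_hour(12))  # ['Sarang', 'Brindavani Sarang']
```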
RagaSequencer (Unified API)
```python
from audio_dsp.sequencer.raga_generator import RagaSequencer

# Create raga sequencer
seq = RagaSequencer(bpm=90, root_frequency=220.0)

# Choose raga based on time of day
raga = seq.choose_raga()  # or specify hour: seq.choose_raga(hour=12)
print(f"Playing {raga['name']} with intervals {raga['intervals']}")

# Generate a raga phrase
audio = seq.generate_raga_phrase(raga, duration=30.0)

# Or use generate() for automatic time-based raga selection
audio = seq.generate(duration=60.0)

# Export directly
seq.export("raga_music.wav", duration=60.0)

# Generate a fractal rhythm separately
rhythm = seq.generate_fractal_rhythm(depth=4, randomness=0.15)
print(f"Rhythm pattern: {rhythm[:16]}...")
```
Legacy API
```python
from audio_dsp.sequencer.raga_generator import (
    choose_raga,
    generate_fractal_rhythm,
    generate_random_core_pattern,
    select_next_note,
)
from audio_dsp.synth import SubtractiveSynth
import numpy as np
import soundfile as sf

# Get raga appropriate for current time
raga = choose_raga()
print(f"Selected: {raga['name']}")

# Generate fractal rhythm pattern
core = generate_random_core_pattern(length=5)
rhythm = generate_fractal_rhythm(core, depth=4, randomness=0.15)
print(f"Pattern: {rhythm}")

# Generate melody using probabilistic note selection
synth = SubtractiveSynth()
synth.osc_wave = "sine"
synth.attack = 0.1
synth.release = 0.3

base_freq = 220  # A3
current_note = 0
melody_audio = []
for beat in rhythm:
    if beat == 1:
        # Play a note
        freq = base_freq * (2 ** (raga['intervals'][current_note] / 12))
        note = synth.synthesize(freq, 0.25)
        current_note = select_next_note(current_note, raga['intervals'])
    else:
        # Rest
        note = np.zeros(int(0.25 * 44100))
    melody_audio.append(note)

sf.write("raga_melody.wav", np.concatenate(melody_audio), 44100)
```
Tree Composer
Rule-Based Tree Composer
`sequencer.tree_composer`: Generate music by traversing tree structures whose nodes represent frequency/duration pairs. Supports depth-first and breadth-first traversal.
Tree Structure
The tree composer builds a hierarchical structure where:
- Root node = Base frequency and duration
- Child nodes = Frequency variations based on angle spread
- Depth = Shorter durations at deeper levels
- Traversal = Order of note playback
Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| levels | int | 3 | Number of tree levels |
| n_splits | int | 3 | Branches per node |
| angle | float | 30.0 | Angle spread in degrees (affects pitch) |
| traversal | str | 'depth_first' | 'depth_first' or 'breadth_first' |
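How `angle` translates into pitch is not spelled out here. As an illustrative assumption, children can be spread symmetrically around the parent, with each degree of angle treated as a tenth of a semitone:

```python
# Hypothetical angle-to-pitch mapping for tree children; the scale
# factor (0.1 semitone per degree) is an assumption for illustration.
def child_frequencies(parent_freq, n_splits=3, angle=30.0):
    """Spread n_splits children symmetrically around the parent."""
    offsets = [angle * (i - (n_splits - 1) / 2) for i in range(n_splits)]
    return [parent_freq * 2 ** (0.1 * o / 12) for o in offsets]

freqs = child_frequencies(261.63, n_splits=3, angle=30.0)
print([round(f, 2) for f in freqs])  # middle child keeps the parent pitch
```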
TreeSequencer (Unified API)
```python
from audio_dsp.sequencer.tree_composer import TreeSequencer

# Create tree sequencer
seq = TreeSequencer(bpm=120, root_freq=261.63)  # C4

# Generate tree composition
audio = seq.generate_tree_composition(
    levels=4,
    n_splits=3,
    angle=30.0,
    traversal='depth_first',
    speed_factor=0.5,
)

# Export
seq.export("tree_music.wav", duration=10.0)

# Or build and traverse manually
root, nodes = seq.build_tree(n_splits=2, levels=3)
path = seq.traverse_tree(method='breadth_first')
audio = seq.generate_from_path(path, duration=5.0)
```
Legacy API
```python
from audio_dsp.sequencer.tree_composer import (
    build_tree,
    traverse_tree,
    generate_audio_from_tree,
    rule_based_tree_composer,
)

# Quick generation
audio = rule_based_tree_composer(
    root_freq=261.63,
    levels=4,
    n_splits=3,
    traversal='depth_first',
    speed_factor=0.5,
    visualize=True,  # Show tree structure
)

# Or step by step
root, nodes = build_tree(root_freq=261.63, levels=3)
path = traverse_tree(root, method='depth_first')
audio, sr = generate_audio_from_tree(path)
```
Chord Progressions
Microtonal Chord Progressions
`sequencer.stepping_chord_progressions`: Generate chord progressions using microtonal scales (24-TET quarter tones, etc.) with stepping patterns based on numeric sequences.
Available Microtonal Modes
| Mode | Steps Pattern |
|---|---|
| micro_ionian | [4, 3, 2, 4, 4, 4, 2] |
| micro_dorian | [4, 2, 4, 4, 4, 2, 4] |
| micro_phrygian | [2, 4, 4, 4, 2, 4, 4] |
| phrygian_dominant | [2, 6, 2, 4, 2, 4, 4] |
| quarter_tone | [3, 3, 3, 3, 3, 3, 3, 3] |
| micro_blues | [6, 4, 2, 2, 6, 4] |
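A steps pattern is cumulative: each entry advances the scale by that many steps of the tuning, a quarter tone each in 24-TET, so the k-th cumulative step maps to f = root * 2^(k/24). A sketch expanding `micro_dorian` into frequencies:

```python
# Sketch: expand a steps pattern into scale frequencies in 24-TET.
def scale_frequencies(root_hz, steps, steps_per_octave=24):
    freqs = [root_hz]
    k = 0  # cumulative steps above the root
    for s in steps:
        k += s
        freqs.append(root_hz * 2 ** (k / steps_per_octave))
    return freqs

micro_dorian = [4, 2, 4, 4, 4, 2, 4]  # sums to 24, so it closes the octave
freqs = scale_frequencies(220.0, micro_dorian)
print(round(freqs[-1], 1))  # 440.0 (one octave above A3)
```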
ChordProgressionSequencer (Unified API)
```python
from audio_dsp.sequencer.stepping_chord_progressions import ChordProgressionSequencer

# Create sequencer with 24-TET (quarter tones)
seq = ChordProgressionSequencer(
    bpm=90,
    root_note='A',
    root_octave=3,
    steps_per_octave=24,  # 24-TET
)

# Set microtonal mode
seq.set_mode('micro_dorian')

# Generate stepped progression
audio = seq.generate_stepped_progression(
    start_num=1,
    step_sizes=[2, 3, 5],  # Cycle through these steps
    num_steps=16,
    notes_per_chord=4,     # 4-note chords
)

# Export
seq.export("microtonal_chords.wav", duration=16.0)

# Use custom scale steps
seq.set_mode(custom_steps=[4, 2, 4, 4, 2, 4, 4])
```
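The stepping idea itself can be sketched independently: cycle through `step_sizes` to choose each successive chord root. This is an assumption about what `generate_stepped_progression` does with its parameters, for illustration only:

```python
import itertools

# Hypothetical sketch of the stepping pattern: each chord root moves
# up by the next step size in the cycle.
def stepped_degrees(start, step_sizes, num_steps):
    degrees = [start]
    steps = itertools.cycle(step_sizes)
    for _ in range(num_steps - 1):
        degrees.append(degrees[-1] + next(steps))
    return degrees

print(stepped_degrees(1, [2, 3, 5], 6))  # [1, 3, 6, 11, 13, 16]
```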
Game of Life Sequencer
Conway's Game of Life
`sequencer.GOL`: Map cellular automaton evolution to musical sequences with EDO tuning support.
Available Modules
- Standard Game of Life mapped to the 12-TET chromatic scale; grid rows are pitches, columns are time steps.
- An EDO (Equal Division of the Octave) variant supporting microtonal scales (19-EDO, 31-EDO, etc.).
Concept
The Game of Life sequencer maps a 2D cellular automaton grid to musical events:
- Rows = Pitch (higher row = higher pitch)
- Columns = Time (left to right)
- Live cells = Notes played
- Dead cells = Silence
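The mapping can be sketched with a standard Conway update plus a column-to-pitch lookup. This is an illustration of the concept, not the module's implementation:

```python
import numpy as np

def life_step(grid):
    """One Conway step on a toroidal grid via neighbor counting."""
    n = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

def column_to_freqs(grid, col, base_hz=220.0):
    """Live cells in one time-step column become chromatic pitches."""
    rows = np.nonzero(grid[:, col])[0]
    return [base_hz * 2 ** (int(r) / 12) for r in rows]

rng = np.random.default_rng(0)
grid = (rng.random((10, 10)) < 0.2).astype(int)  # ~20% initial density
grid = life_step(grid)
print(column_to_freqs(grid, 0))  # pitches sounding at time step 0
```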
Example Usage
```python
from audio_dsp.sequencer.GOL.game_of_life import game_of_life_sequencer

# Run the Game of Life sequencer
# Requires a samples/ directory with audio files
game_of_life_sequencer(
    sample_dir="samples/",
    output_video="gol_sequencer.mp4",
    output_audio="gol_sequencer.wav",
    rows=10,
    cols=10,
    tempo=120,
    max_steps=100,
    init_alive=0.2,  # Initial density of live cells
)
```
Text-Based Sequencer
Text to Music
`sequencer.text_sequencer`: Convert text input into musical sequences with sample clustering support.
Available Modules
- Convert text strings to note sequences based on character mapping.
- Use audio sample clusters to generate varied sequences.
- Play back samples from defined clusters.
- K-means clustering of audio samples by spectral features.
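The character mapping itself is not documented here; one minimal, hypothetical mapping sends letters to scale degrees and everything else to rests:

```python
# Hypothetical character-to-note mapping (the module's actual mapping
# and function names are not shown in this doc).
def text_to_degrees(text, n_degrees=7):
    """Map each letter to a scale degree; other characters are rests."""
    degrees = []
    for ch in text.lower():
        if ch.isalpha():
            degrees.append((ord(ch) - ord('a')) % n_degrees)
        else:
            degrees.append(None)  # rest
    return degrees

print(text_to_degrees("abc h"))  # [0, 1, 2, None, 0]
```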
Example Usage
```python
from audio_dsp.sequencer.text_sequencer.text_sequencer import sequencer

# Run the text-based step sequencer
# Requires numbered sample files (1_kick.wav, 2_snare.wav, etc.)
# and a pattern.txt file with binary patterns
sequencer(
    pattern_file="pattern.txt",
    samples_dir="samples",
    output_file="sequence.wav",
    bpm=120,
)

# Pattern file format (pattern.txt):
# 10101010    (track 1 pattern)
# 00001000    (track 2 pattern)
# [1.0, 0.8]  (volume per track)
```
Non-Linear Sequencer
Liquid Timing Sequencer
`sequencer.non_linear`: Generate sequences with non-linear "liquid" timing, swing, and jitter for organic, humanized rhythms.
Concept
Non-linear sequencing introduces controlled randomness and timing variations to create organic-feeling sequences. The "liquid" timing applies swing and jitter to create grooves that sound more human and less robotic.
Pattern Notation
| Symbol | Meaning |
|---|---|
| K | Kick drum |
| S | Snare drum |
| H | Hi-hat |
| - | Extend previous note |
| . | Rest |
| \|n | Loop n times |
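Swing and jitter can be pictured as displacements of an even grid: off-beat steps are delayed by the swing amount, and every event receives a small random offset. A sketch, assuming an 8th-note grid (the library's exact formula is not shown here):

```python
import random

def liquid_event_times(n_steps, bpm=120, swing=0.15, jitter=0.02, seed=0):
    """Hypothetical liquid timing: swung grid plus per-event jitter."""
    rng = random.Random(seed)
    step = 60.0 / bpm / 2.0  # assumed 8th-note grid
    times = []
    for i in range(n_steps):
        t = i * step
        if i % 2 == 1:  # delay off-beats by the swing fraction of a step
            t += swing * step
        t += rng.uniform(-jitter, jitter)  # humanizing jitter (seconds)
        times.append(max(t, 0.0))
    return times

print(liquid_event_times(4, bpm=120, swing=0.5, jitter=0.0))
# [0.0, 0.375, 0.5, 0.875]
```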
NonLinearDrumSequencer (Unified API)
```python
from audio_dsp.sequencer.non_linear.non_linear_seq import NonLinearDrumSequencer
from audio_dsp.synth import DrumSynth

# Create sequencer with swing and jitter
seq = NonLinearDrumSequencer(bpm=120, swing=0.15, jitter=0.02)

# Load default drum samples from a directory
seq.load_drum_samples("samples/")

# Or add samples manually
drums = DrumSynth()
seq.add_sample("kick", drums.kick())
seq.add_sample("snare", drums.snare())
seq.add_sample("hihat", drums.hihat())

# Generate with liquid timing
audio = seq.generate_liquid_drums(
    pattern_str="K---S---K-S-S---|4",  # Pattern with 4 loops
    loop_length=2.0,                   # 2 seconds per loop
)

# Export
seq.export("liquid_drums.wav", duration=8.0)
```
Legacy API
```python
from audio_dsp.sequencer.non_linear.non_linear_seq import (
    load_samples,
    parse_pattern,
    generate_liquid_timing,
    generate_pattern,
)

# Load drum samples (kick.wav, snare.wav, hihat.wav)
samples = load_samples("samples/")

# Parse pattern string
pattern = "K.S-HK.S-H--.S.H--|4"
events, loops = parse_pattern(pattern)

# Generate non-linear "liquid" timing
liquid_times, samples_per_loop = generate_liquid_timing(
    events,
    bpm=160,
    loop_length=4,
)

# Generate audio output
output = generate_pattern(
    samples, events, liquid_times, loops, samples_per_loop
)
```
Melody Choice
Melodic Composition
`sequencer.melody_choice`: Melody development using compositional techniques such as mirroring, inversion, and repetition. Includes automatic counterpoint generation.
Development Techniques
- Retrograde: reverse the melody
- Inversion: flip the melody around a pivot frequency
- Repetition: repeat a random segment of the melody
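These techniques operate on (frequency, duration) tuples. A minimal sketch of each, assuming develop_melody() combines such operations (its exact behavior is not shown here):

```python
import random

def retrograde(melody):
    """Reverse the note order."""
    return list(reversed(melody))

def inversion(melody, pivot_hz):
    """Reflect each pitch around the pivot: f -> pivot^2 / f gives
    equal and opposite intervals on a log-frequency axis."""
    return [(pivot_hz ** 2 / f, d) for f, d in melody]

def repeat_segment(melody, seed=0):
    """Append a randomly chosen slice of the melody to itself."""
    rng = random.Random(seed)
    i = rng.randrange(len(melody))
    j = rng.randrange(i + 1, len(melody) + 1)
    return melody + melody[i:j]

m = [(261.63, 1.0), (329.63, 0.5), (392.0, 0.5)]
print(retrograde(m)[0])  # (392.0, 0.5)
```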
MelodySequencer (Unified API)
```python
from audio_dsp.sequencer.melody_choice import MelodySequencer

# Create melody sequencer
seq = MelodySequencer(bpm=100, base_freq=261.63)  # C4

# Generate random melody
melody = seq.generate_random_melody(length=8)

# Develop the melody to a target length using compositional techniques
developed = seq.develop_melody(melody, target_bars=16)

# Generate a counterpoint voice
counterpoint = seq.generate_counterpoint(developed, voice_shift=1)

# Render a single melody to audio
audio = seq.render_melody(developed)

# Render multiple voices together
audio = seq.render_voices([developed, counterpoint])

# Or use generate() for automatic composition
audio = seq.generate(duration=30.0)
seq.export("melody.wav", duration=30.0)
```
Legacy API
```python
from audio_dsp.sequencer.melody_choice import develop_melody, generate_counterpoint

# Base melody as (frequency, duration) tuples
base_melody = [
    (261.63, 1.0),  # C4
    (293.66, 0.5),  # D4
    (329.63, 0.5),  # E4
    (392.00, 1.0),  # G4
]

# Develop melody to 16 bars using mirror, flip, repeat
developed = develop_melody(base_melody, target_bars=16)

# Generate counterpoint voices
voice2 = generate_counterpoint(developed, voice_num=1)
voice3 = generate_counterpoint(developed, voice_num=-1)

# Each voice is a list of (frequency, duration) tuples
print(f"Main voice: {len(developed)} notes")
print(f"Counterpoint 2: {len(voice2)} notes")
```
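For intuition, the simplest counterpoint is parallel transposition by a fixed interval. The actual generate_counterpoint() presumably applies more careful voice handling, so treat this only as a sketch of the voice_num sign convention (the fifth-up/fifth-down reading is an assumption):

```python
# Hypothetical sketch: voice_num = +1 transposes a fifth up,
# voice_num = -1 a fifth down (an assumption, for illustration).
FIFTH = 2 ** (7 / 12)  # equal-tempered perfect fifth

def parallel_counterpoint(melody, voice_num=1):
    ratio = FIFTH ** voice_num
    return [(f * ratio, d) for f, d in melody]

voice = parallel_counterpoint([(261.63, 1.0)], voice_num=1)
print(round(voice[0][0], 1))  # ~392.0 (C4 up a fifth -> ~G4)
```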