# Performance Benchmarks
This page contains performance benchmarks comparing FFTA.jl against FFTW.jl.
## Interactive Benchmark Report
Note: The interactive benchmark report below is generated automatically by the CI pipeline when benchmarks are run.
If the report does not appear, benchmarks have not yet been run for this version of the documentation.
## Running Benchmarks Locally
To run the benchmarks on your local machine:
```shell
cd benchmark
julia run_benchmarks.jl
```

This will:
- Run FFTA benchmarks in an isolated environment
- Run FFTW benchmarks in an isolated environment
- Generate an interactive HTML report at `benchmark/benchmark_report.html`
For more details, see the benchmark README.
## Benchmark Methodology
The benchmark suite compares FFTA.jl (a pure Julia FFT implementation) against FFTW.jl (Julia bindings to the FFTW C library).
### Array Size Categories
Benchmarks are organized into categories based on array size structure:
- Odd Powers of 2: 2¹, 2³, 2⁵, ..., 2¹⁵ (2, 8, 32, 128, 512, 2048, 8192, 32768)
- Even Powers of 2: 2², 2⁴, 2⁶, ..., 2¹⁴ (4, 16, 64, 256, 1024, 4096, 16384)
- Powers of 3: 3¹, 3², 3³, ..., 3⁹ (3, 9, 27, 81, 243, 729, 2187, 6561, 19683)
- Composite: 3, 12, 60, 300, 2100, 23100 (cumulative products of 3, 4, 5, 5, 7, 11)
- Prime Numbers: 20 logarithmically-spaced primes up to 20,000
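The categories above can be sketched in a few lines of Julia. This is a minimal illustration of how such size lists might be built, not the actual generation code used by the suite; the prime-selection strategy in particular is an assumption.

```julia
# Hypothetical sketch of the size categories (the suite's actual
# generation code may differ).
odd_pow2  = [2^k for k in 1:2:15]         # 2, 8, 32, ..., 32768
even_pow2 = [2^k for k in 2:2:14]         # 4, 16, 64, ..., 16384
pow3      = [3^k for k in 1:9]            # 3, 9, 27, ..., 19683
composite = cumprod([3, 4, 5, 5, 7, 11])  # 3, 12, 60, 300, 2100, 23100

# Prime sizes: 20 logarithmically spaced primes up to 20,000 could be
# picked, e.g., with Primes.jl's nextprime over a log-spaced grid
# (the exact selection method is an assumption here).
```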
### Metrics
For each array size, we measure:
- Median time: Median execution time across 100 samples
- Runtime/N: Runtime divided by array length (shows scaling efficiency)
- Mean/Min/Max time: Mean, minimum, and maximum execution times, to gauge run-to-run variance
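These metrics can be collected with BenchmarkTools.jl. The following is a minimal sketch, assuming BenchmarkTools is available and that `fft` comes from whichever package the current suite is benchmarking; the suite's actual measurement code may differ.

```julia
using BenchmarkTools
using FFTA  # or FFTW, depending on which suite is running

N = 4096
x = randn(ComplexF64, N)

# `samples=100` matches the 100 samples mentioned above.
trial = @benchmark fft($x) samples=100

med   = median(trial).time  # median time, in nanoseconds
per_n = med / N             # Runtime/N: scaling efficiency
stats = (mean(trial).time, minimum(trial).time, maximum(trial).time)
```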
### Isolation
Each package is benchmarked in a completely separate Julia process to ensure:
- FFTW's methods cannot shadow FFTA's when both extend the shared AbstractFFTs interface
- Fair and accurate performance comparison
- No cross-contamination between implementations
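A driver achieving this isolation could look like the sketch below, which launches each benchmark script in a fresh Julia process with its own project environment. The script and environment paths here are assumptions for illustration, not the repository's actual layout.

```julia
# Hypothetical driver: one fresh Julia process per package, each with
# its own project environment so only one FFT package is ever loaded.
for pkg in ("FFTA", "FFTW")
    script = joinpath(@__DIR__, "bench_$(lowercase(pkg)).jl")  # assumed name
    env    = joinpath(@__DIR__, "envs", pkg)                   # assumed layout
    run(`$(Base.julia_cmd()) --project=$env $script`)
end
```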