Open Source projects


    ZynAddSubFX is an open source real-time software synthesizer that produces high-quality instrument sounds. It is packaged in the best-known GNU/Linux distributions and has a community around it. I started the project in 2002, and in recent years other developers have begun to contribute.

    It has many features and produces high-quality sound, comparable to that of expensive hardware synthesizers. It is a real-time, polyphonic, multi-timbral synthesizer powered by three sound-generation engines. Instruments can be processed with multiple audio effects, such as reverberation, echo, flanger and phaser. Thanks to the "randomness" settings, the instruments it produces have a "natural" quality. It was the first synthesizer to use the PADsynth sound synthesis algorithm.
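The core PADsynth idea can be sketched in a few lines: each harmonic becomes a Gaussian-shaped band of amplitudes in a long spectrum, every bin gets a random phase, and an inverse FFT turns the spectrum into a seamlessly looping, natural-sounding wavetable. The sketch below is illustrative only; the parameter names and defaults are mine, not ZynAddSubFX's actual code:

```python
import numpy as np

def padsynth(N=262144, samplerate=44100.0, f=261.0,
             n_harmonics=12, bw_cents=40.0, seed=None):
    """Minimal PADsynth sketch: Gaussian-shaped harmonic bands,
    random phases, inverse FFT -> looping wavetable."""
    rng = np.random.default_rng(seed)
    bins = np.arange(N // 2)
    amp = np.zeros(N // 2)
    for h in range(1, n_harmonics + 1):
        freq = f * h                                       # harmonic frequency (Hz)
        bw_hz = (2.0 ** (bw_cents / 1200.0) - 1.0) * freq  # bandwidth grows with pitch
        centre = freq / samplerate * N                     # spectral bin of the harmonic
        sigma = bw_hz / samplerate * N                     # band width in bins
        amp += (1.0 / h) * np.exp(-((bins - centre) ** 2) / (2.0 * sigma ** 2))
    phases = rng.uniform(0.0, 2.0 * np.pi, N // 2)         # the key "randomness"
    wave = np.fft.irfft(amp * np.exp(1j * phases), N)
    return wave / np.max(np.abs(wave))                     # normalise to [-1, 1]
```

Because the spectrum is built directly in the frequency domain, the resulting wavetable loops perfectly and each rendering with a new random seed sounds slightly different, which is where the "natural" quality comes from.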

    It was initially released under the "GNU GPL version 2 (only)"; the license was later changed to "GNU GPL version 2 or any later version".

    In 2005 it was presented at the international Linux Audio Conference at ZKM in Karlsruhe, Germany.

    It was also presented at later editions of the Linux Audio Conference, for example in 2015: A Musician's View: Exploring common features of Yoshimi and ZynAddSubFX (by Will Godfrey).



Forks and other software based on it, or that use portions of it:

Other links related to ZynAddSubFX:

Paul's Extreme Sound Stretch

(a.k.a. Paulstretch)

Paulstretch is an audio time-stretching program designed for extreme stretching. While most time-stretching methods produce artifacts at extreme stretch factors (like 10x), Paulstretch produces high-quality sound even at 1000x stretches. The algorithm implemented in Paulstretch was invented by me and is described on the Algorithms page.
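The extreme-stretch approach can be sketched roughly as follows: overlapping windows are read from the input "stretch" times slower than they are written to the output, and each window is re-synthesized from its FFT magnitudes with randomized phases, which smears transients smoothly instead of producing the usual artifacts. This is a simplified sketch under those assumptions, not the exact implementation, and the parameter names are mine:

```python
import numpy as np

def paulstretch(samples, stretch=8.0, window_size=4096, seed=None):
    """Sketch of extreme time-stretching: the read position advances
    'stretch' times slower than the write position; each window keeps
    only its FFT magnitudes and is re-synthesised with random phases,
    then overlap-added."""
    rng = np.random.default_rng(seed)
    n = np.arange(window_size)
    win = 0.5 - 0.5 * np.cos(2.0 * np.pi * n / window_size)   # Hann window
    hop_out = window_size // 2
    hop_in = hop_out / stretch                                # slower input advance
    out = np.zeros(int(len(samples) * stretch) + window_size)
    pos_in, pos_out = 0.0, 0
    while pos_in + window_size < len(samples) and pos_out + window_size <= len(out):
        chunk = samples[int(pos_in):int(pos_in) + window_size] * win
        mag = np.abs(np.fft.rfft(chunk))                      # keep magnitudes only
        phases = rng.uniform(0.0, 2.0 * np.pi, len(mag))      # discard/randomise phase
        resynth = np.fft.irfft(mag * np.exp(1j * phases), window_size)
        out[pos_out:pos_out + window_size] += resynth * win   # overlap-add
        pos_in += hop_in
        pos_out += hop_out
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Because phases are discarded entirely, there is no attempt to preserve transient timing, which is exactly why the output stays smooth and artifact-free even at very large stretch factors.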

The main use of this program is to transform any sound into ambient/relaxing music. It is widely known and appreciated by musicians, sound designers and enthusiasts.



Articles, videos and other links about Paulstretch:

LDR Tonemapping

Inspired by a well-known pseudo-HDR technique for Photoshop, I designed an improved method for creating a "fake HDR" effect. It converts a single image into one where the local contrast is enhanced, and in many cases the result looks better than the original.

Example images:

This effect is highly recommended for creating time-lapse movies. Some movies made from pictures processed with LDR tonemapping are available on Vimeo: video 1, video 2, video 3, video 4




HyperMammut

A few years ago I had the idea of converting a sound into an image and vice versa. The idea is to analyze the sound as a single block and to synthesize the image based on the frequencies found in the sound. The process is reversible, which makes it possible to apply image effects (like rotation or blurring) to audio, or sound effects (like echo or reverberation) to images.
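One way to realise the single-block idea is sketched below; the function names are mine, and HyperMammut's actual sound-to-image mapping may differ. The whole sound is transformed with one full-length FFT and the spectrum is folded into a 2-D array; keeping the complex values makes the round trip exactly invertible:

```python
import numpy as np

def sound_to_image(samples, width):
    """Analyse the sound as a single block: take its full-length FFT
    and fold the complex spectrum into a 2-D array (the 'image')."""
    spec = np.fft.rfft(samples)
    height = -(-len(spec) // width)           # ceiling division
    img = np.zeros(height * width, dtype=complex)
    img[:len(spec)] = spec                    # zero-pad the last row
    return img.reshape(height, width), len(samples)

def image_to_sound(img, n_samples):
    """Invert the mapping: unfold the image back into a spectrum
    and inverse-FFT it to recover the sound."""
    spec = img.reshape(-1)[:n_samples // 2 + 1]
    return np.fft.irfft(spec, n_samples)
```

Under this mapping, an image operation applied to the 2-D array becomes a manipulation of the sound's global spectrum, which is what allows effects like blurring or rotation to be "heard" after converting back.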



I implemented this idea in the C++ program HyperMammut.

The program was later rewritten from scratch in Python.

Contributions to existing open source projects