Synthesizer Interface Discussion

Discussions about anything analog, digital, MIDI, synth technology, techniques, theories and more.
billybobjoe
Newbie
Posts: 16
Joined: Fri Oct 14, 2011 11:13 pm
Real name: Zefan
Gear: Roland JX-3P, D-50, TR-606
Yamaha TX802, RX15, RX17
Ensoniq SQ-80
EDP Wasp, Spider
Akai S2000

Synthesizer Interface Discussion

Post by billybobjoe » Sat May 02, 2020 2:22 am

Edit: The survey is now closed, but please feel free to contribute to the discussion in this thread.

Hi all,

My name is Zefan Sramek and I'm currently doing my master's degree in electrical engineering at the University of Tokyo.

We are researching synthesizer programming interfaces, specifically the problems associated with programming synths with complex synthesis methods and lots of parameters (think DX7 onward). If you look at synths before and after 1983, when the DX7 came out, you can see a stark contrast in the direction interface design took. But since then we haven't really seen any new paradigms surface. We have seen many refinements, but I think programming a synth with 100 or more parameters, one parameter at a time, is still a daunting task.

Specifically, right now we are hoping to hear from you about difficulties or frustrations you face when programming the synthesizers you use. We're hoping to use this information to develop new tools for sound design and exploration. As such your input would be very much appreciated!

For those that are interested, we will also be conducting follow up interviews to go into more detail.

The questionnaire is available here (as a Google form): Now Closed
There is further explanation on the first page of the form. It should only take you about 5 or 10 minutes.

Feel free to post any questions or thoughts in this thread as well.

Thanks so much!

P.S. @Mods: I apologize if there is a more appropriate board to post this on.
Last edited by billybobjoe on Sat May 23, 2020 4:18 am, edited 2 times in total.

desmond
Active Member
Posts: 745
Joined: Tue Oct 11, 2005 12:32 pm

Re: Participate in our Synthesizer Interface Study

Post by desmond » Sat May 02, 2020 9:16 am

Well, we *have* seen new paradigms surface, partly due to better components (things like decent touch screens), and partly due to synthesizer interface design on the software side, where a lot of complexity is made available in a fixed-size window (things like tabbed panels), and more sophisticated features requiring bespoke interfaces (multi-stage loopable envelopes, resynthesis, wavetables and so on) - and a lot of those things are reflected back into hardware synths.

There is always a tradeoff between a direct, custom programming interface (eg the old analog synths, JP8, Prophet 5, OBX etc), where there is a 1:1 control-to-parameter ratio and programming is intuitive and comfortable, but available parameters and possibilities are necessarily limited, and synths with thousands or even tens of thousands of parameters, where direct 1:1 physical control is not feasible (or even desirable!).

Most synths that are not bound too much by budget reach a design compromise, with a good range of controls to operate the most common features, then further mechanics to get in deeper (whether it's extra screens, touch screens, multi-function buttons or modes and so on) and that's about the best paradigm we have to date, for these kinds of complex synths (I'm thinking something like the Waldorf Quantum as a good example of a modern, well-designed, deep synth with a good range of controls plus a touch screen for more complex interface elements and deeper programming).

I actually don't think it's much of a problem - there are plenty of good, analog, digital, or hybrid synths with a good, well-designed tactile control surface exposing a good range of control (look at current DSi/Sequential and Moog instruments for examples).

Where it's a *real* problem imo is a synth control surface for using *plugins* effectively. Here it's a much less solved problem, there are far fewer solutions in the wild, and there's wider scope for new interface paradigms to solve the problem of bringing the power of software synthesizers to the users' fingers and control. *This* is where I'd like to see some innovation and action...

billybobjoe
Newbie
Posts: 16
Joined: Fri Oct 14, 2011 11:13 pm
Real name: Zefan
Gear: Roland JX-3P, D-50, TR-606
Yamaha TX802, RX15, RX17
Ensoniq SQ-80
EDP Wasp, Spider
Akai S2000

Re: Participate in our Synthesizer Interface Study

Post by billybobjoe » Sun May 03, 2020 2:08 am

I do think you're right when it comes to improvements in interfaces. Certainly the Waldorf Quantum looks like a leap forward for a wavetable interface, and the Moog One with its small screens distributed about the panel certainly also is an advance, among many other examples. So I think that in that sense we certainly have seen various new designs surface.

But I would also argue that we don't necessarily have many examples of handling something like FM synthesis well. Of course some synth architectures benefit from having the most salient parameters accessible as physical knobs, and other parameters well organized 'under the hood' so to speak, but what about architectures where you have hundreds of parameters that don't neatly fall into a hierarchy? With the DX7 for example, there certainly are a lot of parameters that you might not need immediate access to, but there is still an impractically large number of parameters that one needs to tweak when making sounds. The same with something like the D-50. Perhaps that is just a problem of organizing physical controls in a way that makes sense for the architecture?

In another sense though, I think we haven't seen much of a change since the very beginning of synth design in that pretty much every synth uses a 1-to-1 approach to programming. Of course there are some cheesy attempts like the 'brightness' fader on the JX-3P, but even today most synths use an 'adjust one parameter at a time' type interface (of some variety). I think the Reface DX and the Digitone are some examples breaking away from this though. But it strikes me that there is still an opportunity for exploring non 1-to-1 interfaces, particularly for things like FM synthesis where 1-to-1 programming can be pretty confusing. Think of having some macro parameters (perhaps even user definable) that let you control multiple parameters at a time. I think more design exploration in this space, with a keen eye for what synthesists actually would use, could be interesting.
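To make that macro idea a bit more concrete, here is a minimal Python sketch of a user-definable macro that sweeps several underlying parameters from a single knob. All parameter names and ranges here are hypothetical, purely for illustration:

```python
# A many-to-one "macro" control: one knob value drives several synth
# parameters at once. Parameter names and ranges are made up for the example.

def make_macro(targets):
    """targets: list of (param_name, low, high) ranges the macro sweeps."""
    def apply(patch, value):  # value in 0.0 .. 1.0
        out = dict(patch)     # leave the source patch untouched
        for name, low, high in targets:
            out[name] = low + (high - low) * value
        return out
    return apply

# A hypothetical 'brightness' macro touching three parameters at once.
brightness = make_macro([
    ("filter_cutoff",    20.0, 99.0),
    ("op2_output_level",  0.0, 80.0),
    ("env_attack",       50.0, 10.0),  # ranges can run downward, too
])

patch = {"filter_cutoff": 40.0, "op2_output_level": 30.0, "env_attack": 30.0}
bright = brightness(patch, 1.0)  # every target at its 'high' end
```

The mechanism itself is trivial; the interesting design questions all live in the `targets` table - which parameters a macro touches, and with what curve.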

As for your comment about controls for plugins, I think you're totally right and thanks for bringing that up. I admit it's not something I had thought much about. I guess right now more or less one is limited to an unlabelled grid of knobs or a custom controller for a certain plugin (which sort of defeats the purpose I suppose). There is the Faderfox EC4, which uses a screen to virtually label the knobs. But there is certainly room here for more designs. I will give it more thought.

Thanks for your reply.

desmond
Active Member
Posts: 745
Joined: Tue Oct 11, 2005 12:32 pm

Re: Participate in our Synthesizer Interface Study

Post by desmond » Sun May 03, 2020 12:05 pm

billybobjoe wrote:
Sun May 03, 2020 2:08 am
But I would also argue that we don't necessarily have many examples of handling something like FM synthesis well. Of course some synth architectures benefit from having the most salient parameters accessible as physical knobs, and other parameters well organized 'under the hood' so to speak, but what about architectures where you have hundreds of parameters that don't neatly fall into a hierarchy? With the DX7 for example, there certainly are a lot of parameters that you might not need immediate access to, but there is still an impractically large number of parameters that one needs to tweak when making sounds.
DX-style FM is no doubt fairly parameter heavy, but in essence it's broken down into a meaningful hierarchy - so overall parameters like algorithm type etc, then you have 6 operators which are identical in terms of controls - so even a hardware interface becomes doable (eg the Jellinghaus one back in the day).

Edit: I'm actually a bit surprised that someone hasn't taken all the available DX7 patches out in the wild, run machine learning algorithms on them, then developed some kind of AI macro control system that focuses on desirable sound characteristics you can morph around in...
billybobjoe wrote:
Sun May 03, 2020 2:08 am
The same with something like the D-50.
Even the D50 has a hardware controller - it's multifunction of course so it reuses controls for different purposes, but even here, there is an overall patch with parameters, plus one or two tones with their parameters (duplicated), and each tone has one or two partials with their parameters.
billybobjoe wrote:
Sun May 03, 2020 2:08 am
Perhaps that is just a problem of organizing physical controls in a way that makes sense for the architecture?
...which gets increasingly harder the more complex an instrument becomes, and the more different synthesis types it supports, which often require a different interface completely (ie, if your synth supports, like software synths can do, FM, Additive, Sampler Playback, VA and Granular methods - many of those things require a completely different layout and control mechanism). And if you start getting into semi-modular stuff (eg, something like Zebra 2), you just can't have a meaningful physical control surface without some kind of screens/virtual mapping and sharing of physical controls. It can be quite a complex problem. (Probably why so few people have made synth plugin controllers).
billybobjoe wrote:
Sun May 03, 2020 2:08 am
But it strikes me that there is still an opportunity for exploring non 1-to-1 interfaces, particularly for things like FM synthesis where 1-to-1 programming can be pretty confusing. Think of having some macro parameters (perhaps even user definable) that let you control multiple parameters at a time.
I certainly think there is opportunity for exploring the space and trying new ways of control. I'll make a couple of points here - there *have* been attempts at this to some degree. DX-style FM was always complex - when Korg were able to license Yamaha's FM technology, they had a system called "Easy-FM" they used in things like the 707 and DS8. They were 4-Op FM synths, but they abstracted this away a little with simpler, more accessible controls, and hid away some of the deeper parameters. You got fewer synthesis options to a degree, but a more accessible synth and interface as a result.

Of course, one of the things you lose the more complex an instrument gets, is the easy, hands on tactile nature of control. Once you learn, say, a Minimoog, you can reach up in muscle memory for the envelopes, or filter controls, have hands on, close your eyes, and control through feel and listening the sound changes. Once you start having screens, multi-function modes and non-1:1 mapping, the cognitive load from your eyes starts to overwhelm the ears, as you hunt through pages looking for particular screens or parameters.

These kinds of trade-offs are interesting and, from my experience when discussing this over the years, quite polarizing.

It's tempting to divide up synth users into two main camps - let's call them "players", and "audio nerds" for now. Players generally want a good way to call up the sounds they need, maybe tweak them, set up their instrument for live or songwriting use. These people aren't interested for the most part in getting under the hood, except perhaps to create certain patches required for certain needs (eg particular sounds for a synth cover band).

Audio nerds are more interested in sounds and synthesis, and the sound design part is fundamental to their music making. These types often want access to everything available; they often want more features and power than the instrument can provide. They are less interested in calling up presets in general, or even playing.

It's a simplification (and there is overlap of course), but I'm not sure how many people fit neatly into those categories. For me, I tend to fall somewhere along a line between the two, but exactly where on that line will vary depending on the task at hand. If I'm songwriting, I'm caring more about music than deep sound design - I will save that until later. If I'm recording, I'll probably want to do a certain amount of tweaking to sounds, but it won't be deep, it will be largely modifying things to taste to fit in the mix. And if I'm in sound design mode, making sounds for me or making a commercial preset pack, I'm probably deeper into "all parameter access" mode.

It may be useful to be able to modify the user interface of an instrument according to needs, so that the "player types" can configure the instrument to be more suitable for their needs, all the way up to audio nerds, where everything is exposed.

Now, I've spent a long time developing my own handling of softsynth plugin control, based on my typical use cases. And for me, that means quick, direct, consistent access to *the most important* synth parameters - amp and filter envelopes, filter settings, oscillator settings and so on. I don't need quick and direct access to the velocity scaling factor of the fourth rate/scale breakpoint of operator 5. I *do* need a fast way to give me tonal options.

When I've discussed synth controllers, there are plenty of people who can't bear the thought of not having everything available (I mean, everything *is*, on the screen), but they seem to need to access *every* parameter by knob - which makes no sense to me, because in order to put thousands of parameters on a knob, you have to distribute them via pages, modes, sections etc - so it takes longer to find and put that parameter on a knob than it does to just tweak on the plugin, with its custom-designed interface for the task.

I think ultimately, I'm kind of dreaming of a synth interface (whether a real synth, or a plugin controller, or both), that's some combination of a large panel-sized touch screen surface covering the bulk of the panel, with real knobs arranged inside it in a particular useful configuration. The synth interface, panel graphics, touch-buttons and knob labels would reconfigure themselves around the physical controls for different purposes.

This would give you the benefit of screen areas where it's the most appropriate (eg, manipulating sample waveforms), macro/soft physical controls which would be correctly labelled and grouped for the task, buttons/menus and so on, with the graphics attempting to lighten the cognitive load of best accessing available parameters.

Players would get a nice large screen with patch names/lists, song lyrics, macro controls for quick adjustment - tweakers would get slightly deeper controls, maybe more, smaller screens around the controls and audio nerds would be able to navigate and use everything the instrument had - the interface would repurpose accordingly.
billybobjoe wrote:
Sun May 03, 2020 2:08 am
I think more design exploration in this space, with a keen eye for what synthesists actually would use, could be interesting.
Indeed.
billybobjoe wrote:
Sun May 03, 2020 2:08 am
As for your comment about controls for plugins, I think you're totally right and thanks for bringing that up. I admit it's not something I had thought much about. I guess right now more or less one is limited to an unlabelled grid of knobs or a custom controller for a certain plugin (which sort of defeats the purpose I suppose). There is the Faderfox EC4, which uses a screen to virtually label the knobs. But there is certainly room here for more designs. I will give it more thought.
A few people have tried, and none of them so far have come up with the device I want, or even really understood and targeted my needs. They are either devices targeted at one specific synth with a fixed set of controls, or they are generic and trying to bring every parameter under control in some way or another.

For me - if I want every parameter, at the moment the easiest way of getting to every parameter is via the plugin itself (with a variety of input methods of course) but for tactile control, I want only a subset of the most important parameters that I can use in a musical context (with the option of going deeper for those tasks that require it).

It's a complex problem, but as technology gets more powerful and cheaper, hopefully we'll start to see some designs that push the paradigm forward a bit more - and possibly, once some people see and use them, think "Of course! Why didn't we think of this before..? It's so much better than what we had..."

meatballfulton
Moderator
Posts: 5868
Joined: Wed Apr 13, 2005 9:29 pm
Gear: Logic Pro X

Re: Participate in our Synthesizer Interface Study

Post by meatballfulton » Sun May 03, 2020 3:12 pm

billybobjoe wrote:
Sun May 03, 2020 2:08 am
the Moog One with its small screens distributed about the panel certainly also is an advance
I'm glad you wrote this, because it's actually not anything new! The Oberheim Xpander/Matrix 12 synths had this in the mid-1980s, but no mfr followed up on it. The idea resurfaced on control surfaces like the Mackie Control and Novation SL. I think part of the reason it's slowly come back is that in the 80s, those displays were $$$$$$$.


Another wonderful display concept that got lost was found on the Ensoniq ESQ-1, EPS and VFX family machines, displaying multiple parameters on the screen at the same time and using buttons on the perimeter of the screen to choose which to control. Just recently, I've been glad to see the ASM Hydrasynth bring this concept back.


I've always felt the UI for programming and the UI for playing don't need to be the same, since they perform different functions. Think of how a computer is programmed: it's all a bunch of code in ASCII text, but when the program is run the user is interacting with keys, mouse, trackpads, joysticks, touchscreens, etc. Roland's external programmers were a good step in that direction that's been abandoned for three decades now, at least in hardware. I'm sure some players took their PG units on gigs, but most probably left them home, just calling up presets and using pedals, wheels and joysticks for control. Software instruments use this idea all the time; many have a choice of "simple" vs. "advanced" GUIs or use wrapper mechanisms like Live's Racks, Reason's Combinator, Logic's Smart Controls, NI's Komplete Kontrol, etc.
I listened to Hatfield and the North at Rainbow. They were very wonderful and they made my heart a prisoner.

billybobjoe
Newbie
Posts: 16
Joined: Fri Oct 14, 2011 11:13 pm
Real name: Zefan
Gear: Roland JX-3P, D-50, TR-606
Yamaha TX802, RX15, RX17
Ensoniq SQ-80
EDP Wasp, Spider
Akai S2000

Re: Participate in our Synthesizer Interface Study

Post by billybobjoe » Thu May 07, 2020 7:23 am

Thanks for these awesome responses. Sorry I'm late getting back... I had a bit of nasty food poisoning.
desmond wrote:
Sun May 03, 2020 12:05 pm
DX-style FM is no doubt fairly parameter heavy, but in essence it's broken down into a meaningful hierarchy - so overall parameters like algorithm type etc, then you have 6 operators which are identical in terms of controls - so even a hardware interface becomes doable (eg the Jellinghaus one back in the day).

Edit: I'm actually a bit surprised that someone hasn't taken all the available DX7 patches out in the wild, run machine learning algorithms on them, then developed some kind of AI macro control system that focuses on desirable sound characteristics you can morph around in...
I guess my thinking here is, and you touch on this later, that for many synth users who aren't super into editing and wave physics, etc, the fact that the DX7 is organized meaningfully doesn't help when there is just such a sheer number of parameters. And with the Jellinghaus, or the new version from DTronics (DT7), breaking out every parameter into a knob doesn't make the fact that there are hundreds of parameters to deal with any easier. Perhaps the opposite. Although I think the distinction you make between 'players' and 'audio nerds' is perhaps the key here, because all DX7 programming threads seem to have both people saying it's impossible to program and people saying it's really simple if you just give it a try.

But ultimately the DX7 seems to have become more of a player's instrument rather than a 'tweaker's'. But I'm wondering if there could be a useful way to make it tweakable even for the 'players'. This is why I'm thinking that something with macro parameters, as you mention, could make sense. I've actually been doing some experimenting with that idea you mention, albeit with the D-50.

This is also an interesting paper that has done some work to that end: http://dafx2019.bcu.ac.uk/papers/DAFx2019_paper_28.pdf
And their demo video:

Of course even without that type of thing you can make useful hardware interfaces for complex synth engines, but it will still be an 'audio nerd's' interface I think. We can probably imagine a PG-1000 for the DX7, so to speak.
desmond wrote:
Sun May 03, 2020 12:05 pm
It may be useful to be able to modify the user interface of an instrument according to needs, so that the "player types" can configure the instrument to be more suitable for their needs, all the way up to audio nerds, where everything is exposed.
So maybe we in fact want to have some type of hierarchical interface that can be 'opened up' so to speak, to allow for more tweaking, but also sort of hidden away. I have a JX-3P with the PG-200, and a basic example could be something like that, but you could have a few more knobs on the synth itself, say for filter cutoff or something. Maybe with a more complex synth engine it would make more sense. I've been thinking I would like a controller just to edit the envelopes on my D-50 / TX802.
desmond wrote:
Sun May 03, 2020 12:05 pm
When I've discussed synth controllers, there are plenty of people who can't bear the thought of not having everything available (I'm everything is, on the screen), but they seem to need to access *every* parameter by knob - which makes no sense to me, because in order to put thousands of parameters on a knob, you have to distribute them via pages, modes, sections etc - so it takes longer to find and put that parameter on a knob than it does to just tweak on the plugin, with it's custom designed interface for the task.
Now I'm not a plugin user by any means, but it has always struck me as odd that so many plugins use virtual knobs. I mean, I get it if it's a recreation of a HW synth, where the interface is part of the appeal and you want it to look the part, but tweaking knobs with a mouse generally seems like a bizarre waste of interface space, particularly when you have an entire computer at your disposal and all the interface paradigms we've come up with. Even when it comes to a physical interface, a knob is mostly convenient because you can make a potentiometer in that shape. It sounds like you're in the same camp more or less, but is there a particular appeal to on-screen knobs as opposed to, say, an interface more akin to adjusting the colouring in Photoshop?
desmond wrote:
Sun May 03, 2020 12:05 pm
A few people have tried, and none of them so far have come up with the device I want, or even really understood and targeted my needs. They are either devices targeted at one specific synth with a fixed set of controls, or they are generic and trying to bring every parameter under control in some way or another.

For me - if I want every parameter, at the moment the easiest way of getting to every parameter is via the plugin itself (with a variety of input methods of course) but for tactile control, I want only a subset of the most important parameters that I can use in a musical context (with the option of going deeper for those tasks that require it).
I'm interested in going a bit deeper into your thoughts on this. For instance, with a plugin controller, how important is it that you have physical tactile controls, like actual knobs? For example, would a large touch screen that could re-purpose itself to match the plugin you were using make sense? Or is the ideal world one where the custom built physical controller pops into existence the moment you go to edit the plugin and then disappears again as soon as you're done? Maybe a combination of the two makes sense. I've been having some ideas about modular controllers. Not modular in the synth sense, but modular in that you could replace and reposition the physical controls to fit the particular plugin you were using it for. I wonder if that's something people would be interested in (more so than just a grid of assignable knobs). Certainly the design of a knobby interface is not just the knobs themselves (and their labels) but their layout on the panel.
desmond wrote:
Sun May 03, 2020 12:05 pm
I think ultimately, I'm kind of dreaming of a synth interface (whether a real synth, or a plugin controller, or both), that's some combination of a large panel-sized touch screen surface covering the bulk of the panel, with real knobs arranged inside it in a particular useful configuration. The synth interface, panel graphics, touch-buttons and knob labels would reconfigure themselves around the physical controls for different purposes.
I think this sounds super interesting and I'd be interested in hearing more about it if you were interested. Maybe we could set up an interview?


meatballfulton wrote:
Sun May 03, 2020 3:12 pm
I'm glad you wrote this, because it's actually not anything new! The Oberheim Xpander/Matrix 12 synths had this in the mid-1980s, but no mfr followed up on it. The idea resurfaced on control surfaces like the Mackie Control and Novation SL. I think part of the reason it's slowly come back is that in the 80s, those displays were $$$$$$$.

Another wonderful display concept that got lost was found on the Ensoniq ESQ-1, EPS and VFX family machines, displaying multiple parameters on the screen at the same time and using buttons on the perimeter of the screen to choose which to control. Just recently, I've been glad to see the ASM Hydrasynth bring this concept back.
Yes, I have an SQ-80 and the programming paradigm on that with the use of the screen is definitely wonderful. Hardly different from a dense knob-per-function interface. Maybe if Yamaha and Ensoniq had teamed up the DX7 wouldn't have such a bad rep.

But I guess what I tried (and failed) to get at was that we have plenty of ways of giving users access to 1-to-1 control to parameter interfaces, but haven't seen much development in the many-to-one, and so forth.

Here's something that's similar to what I've been dreaming up: Hydramorph. This lets you do patch morphing with the Hydrasynth, which seems really interesting. Apparently this software can also do macros.

I feel like this is such an under-developed area. I built a quick prototype of a patch morpher for the D-50, and it gave you a single knob that could generate hundreds of new patches from just 2 fixed patches as input. It certainly felt powerful messing around with that, and I wonder how far we could take tools like this.
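The core of a morpher like that can be surprisingly small - essentially linear interpolation over every numeric parameter between the two source patches. Here is a Python sketch with made-up parameter names; a real D-50 version would operate on the SysEx parameter list instead:

```python
# One-knob patch morphing: t = 0.0 gives patch A, t = 1.0 gives patch B,
# and everything in between is a new in-between patch. Parameter names
# are hypothetical.

def morph(patch_a, patch_b, t):
    """Interpolate every parameter between two patches at position t."""
    return {k: patch_a[k] + (patch_b[k] - patch_a[k]) * t for k in patch_a}

a = {"cutoff": 20, "resonance": 10, "attack": 0}
b = {"cutoff": 90, "resonance": 40, "attack": 60}

halfway = morph(a, b, 0.5)  # {'cutoff': 55.0, 'resonance': 25.0, 'attack': 30.0}
```

A real implementation would also need to round results to each parameter's legal integer range and handle non-numeric parameters (waveform selects and the like) separately, which is where the interesting design decisions start.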

desmond
Active Member
Posts: 745
Joined: Tue Oct 11, 2005 12:32 pm

Re: Participate in our Synthesizer Interface Study

Post by desmond » Thu May 07, 2020 5:32 pm

billybobjoe wrote:
Thu May 07, 2020 7:23 am
But ultimately the DX7 seems to have become more of a player's instrument rather than a 'tweaker's'. But I'm wondering if there could be a useful way to make it tweakable even for the 'players'. This is why I'm thinking that something with macro parameters, as you mention, could make sense.
There will certainly be some "easy" options with macros - some of which have already been done, for example, a "Brightness" control that turns up or down the levels of all operators that are modulators, and envelope macros and so on, which will help.
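As a rough sketch of that "Brightness" macro: scale the output level of every operator that acts as a modulator in the current algorithm, leaving carriers alone. The modulator table below is invented for the example; a real version would use the DX7's actual 32 algorithm definitions.

```python
# Hypothetical modulator table: which operators modulate (rather than
# carry) in each algorithm. Only one made-up entry for the example.
MODULATORS = {1: {2, 4, 5, 6}}

def set_brightness(levels, algorithm, amount):
    """levels: per-operator output levels, op 1 first; amount in 0.0 .. 1.0."""
    mods = MODULATORS[algorithm]
    return [level * amount if op in mods else level
            for op, level in enumerate(levels, start=1)]

dimmed = set_brightness([99, 80, 99, 70, 60, 50], 1, 0.5)
# carriers (ops 1 and 3 here) keep their levels; modulator levels are halved
```

Because modulator level is what controls spectral content in FM, one knob over this mapping behaves much like a filter cutoff does on a subtractive synth.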

I think the bigger problem with FM is its "constructive" nature. With regular analog subtractive synthesis, to make an analogy, you have a big block that you chip away at, refine, and arrive at your nice sound. With FM, you have a bucket of gravel, and you have to assemble from the ground up, which requires you to have some knowledge of what sound you want to achieve, and to know structurally how to achieve it.

(I'm talking about starting from scratch programming, rather than just tweaking some other sound).

Anyway - yes, I'm sure there are ways to reduce the visible complexity of an FM patch, while still allowing the user a good degree of sound shaping options in a sensible and less intimidating way.
billybobjoe wrote:
Thu May 07, 2020 7:23 am
So maybe we in fact want to have some type of hierarchical interface that can be 'opened up' so to speak, to allow for more tweaking, but also sort of hidden away.
We already do that in the software world of course, where it's easier to do, and it generally works well - the user just switches to the mode that best fits their needs at that given moment. With fixed hardware, this is of course more challenging, so the hardware would need some degree of configurability (hence my desire to have physical knobs but graphics-generated labelling, to reconfigure your front panel for the mode you want).
billybobjoe wrote:
Thu May 07, 2020 7:23 am
Now I'm not a plugin user by any means, but it has always struck me as odd that so many plugins use virtual knobs.
...
It sounds like you're in the same camp more or less, but is there a particular appeal to on-screen knobs as opposed to, say, an interface more akin to adjusting the colouring in Photoshop?
It's a mechanic that makes sense to most people, and uses a small amount of real estate to give a larger gestural control. We do move away from knobs where there are clear wins - for example, manipulating envelope points or shape directly with the mouse, rather than moving knobs for ADSR. A knob is really just another visual for changing a value with a mouse, with a bit clearer display of the current position than just displaying a number.
billybobjoe wrote:
Thu May 07, 2020 7:23 am
I'm interested in going a bit deeper into your thoughts on this. For instance, with a plugin controller, how important is it that you have physical tactile controls, like actual knobs? For example, would a large touch screen that could re-purpose itself to match the plugin you were using make sense?
Touch screens certainly make sense for some interface things, but I don't find them particularly great for programming synths - I have a bunch of synths on my iPad, and the "hands-on" interface doesn't particularly inspire me to program them. Even a good touch interface is disconnected from the sound, and requires a significant cognitive load to make sure you're hitting the right part of the screen, dragging in the right direction, not going off the slider control at an angle and so on - and all the time, the visual information is cluttering the sound you're trying to control.

A real knob requires a small amount of cognitive load to reach it (although muscle memory alleviates this a lot), but once you have the control, there is *zero* visual cognitive load required. You can shut your eyes, the feel of the position gives you some indication of where you are in the control's range, and you can immediately hear the effect.

For switches, buttons and so on, a touch screen is fine - you have about the same cognitive load as a physical button - ie, you just need to look where it is and aim for it, and the physical tactile feedback is less important for me here. So touchscreen buttons I'm fine with, or some touch interface to modify waveforms in real time on a screen is fine too. But for controls, physical ones are desirable - which is why I think my ideal solution would be based around a re-configurable touch panel surrounding a bunch of real controls, with the panel display giving you the context of the controls at any given time.
billybobjoe wrote:
Thu May 07, 2020 7:23 am
Or is the ideal world one where the custom-built physical controller pops into existence the moment you go to edit the plugin and then disappears again as soon as you're done?
Even hypothetically, I don't think we need to go that far to have a useful solution, and one that's *way* better than current solutions (either involving a generic MIDI controller, a fixed layout synth which you can use to control synth plugins, or a slightly more sophisticated generic controller with screens and some intelligence.)
billybobjoe wrote:
Thu May 07, 2020 7:23 am
I've been having some ideas about modular controllers. Not modular in the synth sense, but modular in that you could replace and reposition the physical controls to fit the particular plugin you were using it for. I wonder if that's something people would be interested in (more so than just a grid of assignable knobs). Certainly the design of a knobby interface is not just the knobs themselves (and their labels) but their layout on the panel.
So there is a guy that's posted here and elsewhere who is developing exactly this kind of system. For some people, it might be a good solution, but I'm skeptical for a number of reasons. We could go into this a bit, but for me, I'm not currently that positive about such a solution, in the various forms I can imagine it working in a practical commercial sense anyway.
billybobjoe wrote:
Thu May 07, 2020 7:23 am
I think this sounds super interesting and I'd be interested in hearing more about it if you were interested. Maybe we could set up an interview?
The thing is, I do have some strong opinions and ideas in this area - I have my use cases, and some ideas about how to handle some of the problems involved. But my ideas are still fuzzy and not clearly defined - I haven't sat down and solved the problem, and to do that, I'd actually have to design the thing myself. And I don't have the ability to actually build the product, so I'd rather do the concept and design work on other products I *can* actually make.

The bottom line is that there are opportunities in this space, and it doesn't seem like anyone is making a bold step to come up with a viable, modern solution here, using the abilities of modern technology. The problem is complex, but it's not, in my opinion, impossible to come up with something that is good. I have basically set up my own plugin controller system using a generic controller and some smarts, and it's already *way* better for me than anything else I've used, in specific ways. And that's with a generic controller, which obviously has weaknesses. There are ways this can easily be improved, and with a more sophisticated controller that looks at these problems and handles them properly, the solution could be better still.

Like I say, I have ideas, but I'm not in a position to dedicate the effort into designing a solution for this... although it might cross over into other ideas I'm in various stages of making.

Anyway, happy to discuss this interesting topic on the forum, but that's as far as I'll go.
billybobjoe wrote:
Thu May 07, 2020 7:23 am
But I guess what I tried (and failed) to get at was that we have plenty of ways of giving users one-to-one control-to-parameter interfaces, but haven't seen much development in many-to-one mappings and the like.
It's difficult to reduce control without feeling like you're restricting options. I guess it would need to be done in such a way that you get some benefits other than just making it a simpler, easier to understand system.
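To make the many-to-one idea concrete, here's a minimal sketch of a "macro" control: one knob value fans out to several synth parameters, each with its own range and direction. The parameter names and ranges are purely illustrative, not taken from any particular synth.

```python
# Many-to-one "macro" sketch: a single knob value in [0, 1] drives
# several parameters at once, each with its own (lo, hi) range.
# An inverted range (lo > hi) makes a parameter decrease as the knob rises.

def apply_macro(knob, mappings):
    """knob in [0, 1]; mappings maps name -> (lo, hi). Returns parameter values."""
    return {name: lo + knob * (hi - lo) for name, (lo, hi) in mappings.items()}

# Hypothetical "brightness" macro
brightness = {
    "cutoff":     (200.0, 8000.0),  # opens the filter as the knob rises
    "env_amount": (0.8, 0.2),       # inverted: less envelope as it opens
}

apply_macro(0.0, brightness)  # {'cutoff': 200.0, 'env_amount': 0.8}
```

The point is that a single gesture can express a musically meaningful direction ("brighter") rather than one raw parameter, which is one way to reduce control count without simply removing options.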
billybobjoe wrote:
Thu May 07, 2020 7:23 am
I feel like this is such an under-developed area. I built a quick prototype of a patch morpher for the D-50 and it gave you a single knob that could generate hundreds of new patches just taking 2 fixed patches as an input. It certainly felt powerful messing around with that, and I wonder how far we could take tools like this.
Yes, it's not new, various synths have had morphing things like this for a while - some for patch generation, and others (which I personally really like) as a performance feature, rather than a sound design one...
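A patch morpher like the D-50 prototype described above can be sketched in a few lines: linearly interpolate every numeric parameter between two fixed patches, with a single knob as the interpolation amount. The parameter names here are made up for illustration; a real implementation would also need to handle non-numeric parameters (waveform selects, switches) sensibly.

```python
# One-knob patch morpher sketch: interpolate all numeric parameters
# between two source patches. t = 0 gives patch_a, t = 1 gives patch_b.

def morph(patch_a, patch_b, t):
    """Linearly interpolate between two patches; t in [0, 1] is the knob."""
    return {
        name: round(a + t * (patch_b[name] - a))
        for name, a in patch_a.items()
    }

bright = {"cutoff": 120, "resonance": 10, "env_decay": 40}
dark   = {"cutoff": 30,  "resonance": 60, "env_decay": 90}

mid = morph(bright, dark, 0.5)
# halfway point: cutoff 75, resonance 35, env_decay 65
```

Sweeping t continuously is what makes it feel like hundreds of patches from just two inputs.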

I think AI/machine learning is going to start changing things quite a bit, so it would be interesting to see what happens with that... I can see synth makers getting in a few experts to voice a synth, and then using that data to AI-generate a whole bunch of patch derivatives, weigh up the best ones, and even expose that control to the user's own patches...

User avatar
meatballfulton
Moderator
Moderator
Posts: 5868
Joined: Wed Apr 13, 2005 9:29 pm
Gear: Logic Pro X

Re: Participate in our Synthesizer Interface Study

Post by meatballfulton » Fri May 08, 2020 12:58 pm

I first encountered skeuomorphism when I tried Reason 15 years ago. It was easy to use for me since it looked like hardware I was familiar with...the mixer looked like a Mackie, the smaller FX units looked like the Boss half-rack processors, the backside had CV connections, etc.

Over the years I decided this was actually a bad thing because the look of the interface takes on too much importance. I was very happy to see Ableton made the UI for their devices in Live to be much more stylized, suggesting knobs and sliders without being so explicit, plus all devices looked the same, making function trump form.

The worst offender currently is Logic, which ironically is my DAW of choice at this time. The problem is that while many of the devices are quite usable sonically, the UIs are a total mess. This is Logic's ES-1 synthesizer, which despite the look is much like a Roland Juno in architecture.

Image

Would you rather program a Juno or this thing?

As far as the DX7:
I think the distinction you make between 'players' and 'audio nerds' is perhaps the key here because all DX7 programming threads seem to have both people saying it's impossible to program and people saying it's really simple if you just give it a try.
A friend of mine is an audio engineer who worked at Bose for many years. He owns a DX7 and said it was easy to program because "it's all just Bessel functions." That's a nerd talking. I'm an engineer myself and I remember never wrapping my head around Bessel functions!

The basic concept of two FM "operators" is simple to understand if the user just tries the two most basic cases, creating squares and saws. The DX7 seems overly complex because it has 6 operators, but each algorithm that uses multiple carriers is just layering different sounds together to create a more complex whole. This is no different from the D50 concept of using a sample for the attack portion of the sound and synthesis for the sustained portion. The algorithm with 6 carriers makes great organ sounds, because layering harmonics is how a Hammond organ works.

The other basic concepts of FM are that ratios between carrier and modulator are consonant for integer ratios and dissonant (metallic, noisy) for non-integer ratios, and that modulation depth acts much like filter cutoff: the more modulation, the more harmonics. Learn those three concepts and FM starts making total sense. Sadly, many FM implementations in softsynths are only 2 operator, and with only 2 operators you can't do very much. Yamaha's new Montage/MODX synthesizers now offer 8 operators and are capable of huge, warm sounds that rival analog, partially because you can stack up to 8 FM sounds into a single patch...allowing unison detuning, a trick that goes as far back as the DX1 and TX816, plus morphing.
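The ratio and modulation-depth ideas above can be shown in a toy 2-operator example (a simple phase-modulation sketch, not the DX7's actual implementation, and with arbitrary frequencies and depths):

```python
import math

def fm_sample(t, carrier_hz, ratio, mod_index):
    """One sample of 2-operator FM: a modulator at carrier_hz * ratio
    phase-modulates the carrier. Integer ratios give harmonic (consonant)
    spectra; non-integer ratios give inharmonic, metallic ones.
    mod_index is the modulation depth: more depth, more sidebands/harmonics."""
    mod = mod_index * math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod)

sr = 44100  # sample rate

# Harmonic tone: 2:1 modulator-to-carrier ratio
harmonic = [fm_sample(n / sr, 220.0, 2.0, 3.0) for n in range(sr)]

# Inharmonic tone: non-integer ratio gives the classic metallic/bell character
bell = [fm_sample(n / sr, 220.0, 1.414, 3.0) for n in range(sr)]
```

Sweeping `mod_index` from 0 upward behaves much like opening a filter, which is the cutoff analogy made audible.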

How do you really learn FM? Take a preset and deconstruct it, same as you learn subtractive synthesis. Why are synths like the SH-101 and Juno so loved? They're simple and after many iterations of that architecture Roland arrived with a synth that is easy to make sound good and hard to make sound bad.

Even the MiniMoog isn't a complex synth. It has very limited modulation options, but as a playable instrument it's far superior to its modular ancestors and even its competitor, the ARP Odyssey. For modern musical styles that rely on sequencing and "automation" (or "motion sequencing" as Korg called it), there is no real-time playing going on, so a Minimoog is at a significant disadvantage compared to an even more limited machine like a Korg Minilogue with its multiple channels of motion sequencing.

The other aspect of DX7 playability that is often totally overlooked has nothing to do with FM. It was 16 voice polyphony coupled with a velocity and aftertouch sensitive keybed. It was arguably the first synth that could be played like a piano. Adding that to the new sounds it could create, it was bound to be successful. Notice how the DX9 with only 4 operators and no velocity or aftertouch was never as popular.
I listened to Hatfield and the North at Rainbow. They were very wonderful and they made my heart a prisoner.

billybobjoe
Newbie
Newbie
Posts: 16
Joined: Fri Oct 14, 2011 11:13 pm
Real name: Zefan
Gear: Roland JX-3P, D-50, TR-606
Yamaha TX802, RX15, RX17
Ensoniq SQ-80
EDP Wasp, Spider
Akai S2000
Contact:

Re: Synthesizer Interface Discussion

Post by billybobjoe » Sat May 23, 2020 8:22 am

Sorry for the late reply... It's been a bumpy last few weeks.
desmond wrote:
Thu May 07, 2020 5:32 pm
So there is a guy that's posted here and elsewhere who is developing exactly this kind of system. For some people, it might be a good solution, but I'm skeptical for a number of reasons. We could go into this a bit, but for me, I'm not currently that positive about such a solution, in the various forms I can imagine it working in a practical commercial sense anyway.
I'm interested in your thoughts on this, if you don't mind going deeper.

My thought is you would start off naively by simply developing a system where the user can, say, plug knobs or buttons or sliders into some sort of backplane in the arrangement they want. Perhaps you want to control a Minimoog VST, so you line up the right knobs, etc., in a way that matches the Minimoog layout. My first thought is then that it will be a lot of work to do this every time you change VSTs, especially if you have to remember the layout each time. So maybe we want some sort of 'layout memory', where the backplane can indicate where certain control elements go, perhaps with multicolour LEDs. Maybe if the connection of the elements were magnetic, it would be faster to move them around. I still wonder about this, though, since you'd need a bucket of knobs and whatnot next to your setup and would still spend the time required to change the layout. Maybe good if you go from one deep editing session to another, but not so much if you want to switch between a number of VSTs quickly.

I took a quick look for the modular controller you mentioned and found this: https://oscine.co/ - he had posted on Reddit looking for feedback, so I would guess it may be the same person. They are taking a completely different approach I admit I hadn't even thought of, with control blocks you snap together. This I can see being better than just a single fixed generic controller with a grid of knobs, but maybe not too far beyond it.
desmond wrote:
Thu May 07, 2020 5:32 pm
I think AI/machine learning is going to start changing things quite a bit, so it would be interesting to see what happens with that... I can see synth makers getting in a few experts to voice a synth, and then using that data to AI-generate a whole bunch of patch derivatives, and weigh up the best ones ,and even exposing that control to the user's own patches...
I've been trying to think of some things along these lines.
meatballfulton wrote:
Fri May 08, 2020 12:58 pm
I first encountered skeuomorphism when I tried Reason 15 years ago. It was easy to use for me since it looked like hardware I was familiar with...the mixer looked like a Mackie, the smaller FX units looked like the Boss half-rack processors, the backside had CV connections, etc.

Over the years I decided this was actually a bad thing because the look of the interface takes on too much importance. I was very happy to see Ableton made the UI for their devices in Live to be much more stylized, suggesting knobs and sliders without being so explicit, plus all devices looked the same, making function trump form.

The worst offender currently is Logic, which ironically is my DAW of choice at this time. The problem is that while many of the devices are quite usable sonically, the UIs are a total mess. This is Logic's ES-1 synthesizer, which despite the look is much like a Roland Juno in architecture.


This is a really interesting point. So on the one hand there are advantages to breaking away from existing designs, in that you signal that there are new possibilities afforded to the user, but at the same time you risk alienating users by being too different. I've noticed this problem a lot with new musical interface research, where the designs researchers come up with are really interesting, but are so foreign one can't imagine them being practically adopted. I even see that in comments on posts about new controllers and such. (And yes, I'd naturally prefer the Juno, haha.)
meatballfulton wrote:
Fri May 08, 2020 12:58 pm
The basic concept of two FM "operators" is simple to understand if the user just tries the two most basic cases, creating squares and saws. The DX7 seems overly complex because it has 6 operators, but each algorithm that uses multiple carriers is just layering different sounds together to create a more complex whole.
In fact, I just bought a DX27 on a whim last week (since it was unchecked and super cheap) and had an interesting experience with this. It's the same architecture as the DX100: no Pitch EG, 4 OPs, basically none of the bells and whistles of the DX7 - and when I programmed that, it was suddenly like FM synthesis made sense.
meatballfulton wrote:
Fri May 08, 2020 12:58 pm
Learn those three concepts and FM starts making total sense.
Yes, exactly. Even using the menu interface was no big deal. Maybe I'm undermining my own research, haha.

Anyway, I also just wanted to say thank you so much for your responses, both of you. It has actually helped me enormously and started me in a lot of directions I had previously overlooked. I'll try to update the thread if I have interesting progress to discuss, and I'll be back to keep discussing.

Post Reply