View Full Version : New hardware?
vinayanne
13.05.2013, 08:37 PM
Apologies if this has been asked recently (didn't readily find a thread on it)....but does anyone have a sense for whether we can expect new Virus hardware?
I'm reluctant to buy one now, given how long the current hardware has been out. From a synthesis perspective, I'm not really wanting for anything more. However, I would love to see one of the following:
1) Hardware becomes purely a controller (a la Maschine)
2) Radically improved connectivity/latency
Thanks!
My estimates for dates of successive hardware releases have been poor in the past, so I dare not guess further. Access-Music's business model changed with the introduction of the TI. Prior to that they released new hardware synths regularly (once every several years), but with the TI series they gave themselves much more room to add large successive software updates rather than requiring new hardware each time, which is why TI OS is currently version 5 yet the TI hardware has only been refreshed once since the original TI|1, and even then it was a relatively minor refresh rather than a substantially different hardware upgrade.
Timeline (hardware series):-
Virus A launched 1997
Virus B series launched 1999
Virus C series launched 2002
Virus TI|1 launched 2005
Virus Snow (TI|1) 2008... present.
Virus TI|2 launched 2009... present.
Since it was four years between the launches of TI|1 and TI|2, I originally thought a TI|3 might come earlier this year (Winter NAMM 2013), but it wasn't to be. TI|3 may come next year, maybe the year after - if there is a TI|3 at all. I have no clue. Nothing has been publicly stated.
However, we know that Christoph Kemper (aka Kemper Digital), the founder and primary coder of the Virus, has been working on a series of high-end profiling amplifiers for guitarists for the last two years or so, the first of which was released in January 2012. Several new versions debuted at the 2013 NAMM show earlier this year, and he had a pitch-shifter in the works for the Musikmesse that has just been and gone. So whether he will keep working on the guitar side of things instead of synths for a while, and if so for how long, is anyone's guess.
Every year around the time of the NAMM show, many of us cross our fingers hoping for a Virus-related announcement, but are disappointed year after year.
This should make a good thread for intertwining substantiated facts with pure speculation to come up with a compilation of conspiracy theories :)
I'll go ahead and start...
A Virus equivalent of Maschine would imply processing takes place on the host system (PC or Mac) rather than on a dedicated hardware device, and thus the Virus would become merely another soft-synth. This is one of those scenarios that consumers of the product would love but manufacturers would hate. Apple, and the lessons learned from Steve Jobs before he passed, have sent the message to many companies that the profit margin potential of hardware is greater than that of software alone, and that tightly integrated hardware and software, particularly closed and proprietary systems, quite frankly make more money at the end of the day. For this reason, I personally believe we are currently trending the other way, with soft-synth vendors looking for ways to become hardware vendors; there's too much financial incentive not to.
From a sheer technical standpoint, some believe the filter processing (or at least the efficiency of it) in the Virus has not yet been matched by soft-synths. I'm not sure this is necessarily true, but if there is any truth to it, the character of the filters could be a result of the DSPs used by the Virus (the Freescale 56321), which have a dedicated parallel filter coprocessor (dubbed the EFCOP). If so, it would mean that general-purpose CPUs like the ones in most folks' computers cannot do filters and certain types of FX (think convolution reverb) with the same efficiency as the DSP in the Virus, so moving the Virus onto the host CPU would sacrifice whatever advantage to the ears the dedicated hardware provides. I have also heard that the VST plug-in standard (still the most used) does not allow for extremely efficient parallelism... my own personal experience with modern VSTs and their CPU core usage would contradict this... but if true, it would add another hurdle to doing sound processing on a CPU versus a dedicated DSP.
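To make that concrete, here is roughly what a digital filter boils down to - just an illustrative C++ sketch of one biquad section (transposed direct form II), not anyone's actual Virus code. The whole job is a handful of multiply-accumulates per sample, which is exactly the kind of loop a dedicated filter coprocessor like the EFCOP grinds through in hardware:

#include <vector>

// One biquad (two-pole, two-zero) filter section, transposed direct form II.
// Coefficients are assumed to be pre-computed and normalised by a0.
struct Biquad {
    double b0, b1, b2, a1, a2;   // filter coefficients
    double z1 = 0.0, z2 = 0.0;   // internal state

    void process(std::vector<float>& buffer) {
        for (float& x : buffer) {
            double y = b0 * x + z1;        // multiply-accumulate...
            z1 = b1 * x - a1 * y + z2;     // ...and a few more of them
            z2 = b2 * x - a2 * y;
            x = static_cast<float>(y);     // write the filtered sample back
        }
    }
};

A CPU has to share those cycles with everything else it is doing, while a DSP with a filter coprocessor can run loops like this without touching its main core. Whether that is really where the "it just sounds tighter on hardware" impression comes from is pure speculation on my part.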
As far as connectivity and latency improvements go, this seems to be the biggest area of complaint and, I believe, the area where we are all hoping for the big breakthrough. The question is, why hasn't it been done to date, and why would a new product line be needed to accomplish it? My UltraNova is similar in that the sound engine lives in the synth, it optionally communicates with a plug-in editor, it streams audio over USB, and it has a built-in audio interface option, yet it seems to have little or no issue delivering on the expectations of USB integration, at about one fifth the cost of the Virus. Granted, it is mono-timbral and offers fewer voices on paper, but for the cost, someone could buy five of the things to offset that issue.
Sometimes I wonder if the exorbitant price point of the Virus isn't part of what helps sustain its status. Back in the 70's, when Harley-Davidson was getting their lunch eaten by cheaper, higher quality Japanese motorcycles, they revamped their image in part by dramatically increasing their prices. The name of the restaurant escapes me, but a struggling sandwich shop somewhere in Philadelphia decided to achieve notoriety by offering a cheesesteak that cost $100. In both cases, it turned out to be a brilliant marketing ploy. I'm starting to wonder if Kemper is a similarly minded evil genius.
vinayanne
14.05.2013, 01:21 AM
Great perspectives (and thanks for your insight)!
namnibor
15.05.2013, 03:53 AM
It would be great if full High Speed USB 2 or USB 3 bandwidth were utilized - and they could call the new beast "VIRUS OUTBREAK" ...just my two cents. I just find it really odd that USB 3 is not even utilized in newer audio interfaces :confused: .
Berni
15.05.2013, 04:49 AM
I know there is always going to be a market for hardware but it is getting smaller as the generations that grew up with it diminish & the generations that came into music on software/PCs become the majority. They are the future customers & they are the ones that are going to decide whether hardware has much of a future or not. I'm guessing they're going to say 'who needs it'... I'm pretty much there myself.
Vinyl sounds better than CDs, MP3s etc. but when was the last time anybody bought a new 12"? My Virus sounds better than pretty much all my VSTs, but the number of software instruments I can get for the price of a single Snow makes it pretty much obsolete to the next generation.
TweakHead
15.05.2013, 07:29 PM
Hmmm... Yeah, software has come a long way. But... even if you consider the Virus (or anything similar) just a plug-in with a dedicated hardware controller, that still gives you a form of control you can't easily get with other software instruments - and yeah, the level of quality that goes into such a product is way beyond most software-based stuff. So I think hardware will live long, but maybe we'll be seeing more and more products that give us the best of both worlds. Integration is something most of us welcome. We need some new standards. There's VST, Audio Units, RTAS. A single multi-platform thing would be better and would let programmers focus on what's important. There's a lot of confusion regarding USB or FireWire too, and thus regarding connections. It's a shame that companies push their own solutions instead of thinking about what's useful for the users. Thunderbolt is Apple's new baby, for example. Who in their right mind would stand behind it? How can anyone be sure it's got a future? This is the other part of the problem, and these seem to be very important issues for the Virus, since Access - or any company that wishes to do the same - has to choose something that will work for the majority of users across all platforms, which isn't an easy task. We've also talked about how Novation and Korg have done something similar, and with success.
I bet it all comes down to the programmer being busy with other stuff, and I'm pretty sure Access will blow everyone's mind in the near future. At least more so than Clavia, for example.
Nowadays I think it's cool to have a bit of everything: software, hardware VA, analogue... It's cool times to be making music: so many options and so much good stuff out there!
namnibor
15.05.2013, 09:18 PM
Tweakhead, you hit the proverbial nailS on their heads with what you wrote!! The MIDI standard might be 'old' and perhaps could itself use an update, but the point I am making is that it was and IS still quite a ground-breaking thing to get all manufacturers to come on board with a unified standard.
The very reason I dropped learning Pro Tools not very far into it was AVID's insistence on being exclusive from everyone else - and even ReWire has its issues if AVID decides they do not wish to play nice with, say, Reaper, just as an easy example.
I understand the history and it unfortunately ALL comes down to Consumerism and Marketing; simply MONEY! Coming back to synths after military career/college/more military, it still baffles my mind how fraking *confusing* it is today to decide on a great audio interface without throwing A LOT of cash down the throats of RME. Admittedly I know nothing about computer code and such, but SURELY stable drivers should not be that hard to produce.
Then there's the VST/AU/RTAS mess, let alone all the variables in platforms and operating systems, and I think these things, among many other variables leading down the greed path, are going to allow hardware to prevail for some time.
I do not own anything by Apple, but it seems the iPad has some awesome innovators writing synthesis apps for it. It was Wolfgang Palm who caught my attention with his wavetable synth app, and you do not hear of these app makers producing versions of their apps for Android devices, et al, do you?
Anyway, I think another HUGE hurdle the "all software synth world" has to jump is to stop making so many shitty MIDI controllers.
I too think Access will more than likely counter with something to blow away the potential competition that DSI Prophet 12 *may* give them. It could even be a departure from the Virus architecture and something so totally new it blows our minds!
As SSD hard drives are coming down in price it almost would make sense to utilize them for the software inside the hardware. Native Instruments have done what seems to make sense.
Anyway, it IS a wonderful time to be making music and playing with sound!
One thing about Thunderbolt: it's actually Intel's baby rather than Apple's; Apple was just the first to make it the sole interface to a piece of hardware (the Thunderbolt Display). I guess the reason we don't see music hardware folks jumping right on Thunderbolt is that, for the most part, there is nothing about streaming audio that would, in theory, be improved by it. Everything from USB 2 on up should be able to handle streaming audio well enough. God knows my FireWire audio interface does, and the UltraNova works beautifully over USB, so assuming competent developers, there should be no reason the Virus can't utilize USB properly.
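Some back-of-the-envelope numbers to back that up (my own arithmetic, not anything from a spec sheet; I'm assuming a 24-bit / 96 kHz stereo stream):

#include <cstdio>

int main() {
    // Raw payload of one stereo audio stream (assumed 24-bit / 96 kHz).
    const double sample_rate = 96000.0;
    const double bits_per_sample = 24.0;
    const double channels = 2.0;
    const double stream_mbit = sample_rate * bits_per_sample * channels / 1e6;

    std::printf("one stereo 24/96 stream: ~%.1f Mbit/s\n", stream_mbit); // ~4.6 Mbit/s
    std::printf("USB 2.0 high speed bus:  480 Mbit/s nominal\n");
    return 0;
}

Even allowing for isochronous transfer overhead and other traffic on the bus, a stereo stream or three is a tiny fraction of what USB 2 can move, so raw bandwidth is not the excuse.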
Also since Thunderbolt has not had a real reason to be needed on the PC yet, not all PCs have it. So, as an interface choice, a hardware manufacturer has to look at a market where the PC still has 90-ish percent of the consumer market compared to Macs (admittedly probably different numbers if we limited that to the consumer music production population but that's tougher to measure).
I looked briefly at the Apollo interfaces, but opted out because of the high cost of the hardware itself (and apparently to really utilize them you have to modify your workflow to do things the UAD way, using their methods and plugins), and because Thunderbolt was really only supported on the Mac. I couldn't stomach paying so much for an audio interface that tethers me to any one platform, particularly Apple given their track record.
What would be nice is if the Virus, and any hardware synth that offers integration, could find it in their heart to make the computer interface a swappable option card. That of course has a dramatic effect on the cost to develop, test and produce, which would ultimately be passed on to consumers whether or not they needed more than one connectivity type, but it would sure provide some peace of mind (even if only as a placebo) when purchasing a hardware investment like a $3,000 synth.
Berni
15.05.2013, 11:34 PM
Well, there you all go on about connectivity, but software doesn't need it in the physical sense & as for a few different plug-in formats, compared to hardware that is nothing. I also believe that MIDI has been updated by several people - Yamaha & Roland spring to mind - but it has never caught on.
I know a lot of people that are quite happy to produce using only their laptop & if all you are using are virtual instruments then it is quite feasible.
It is a good time to make music indeed!
TweakHead
15.05.2013, 11:51 PM
Still think there should be a new software standard, similar to what's been done with MIDI back in the day - like "namnibor" pointed out. I started thinking about this when using Reaktor. I suppose most users run it just like any other instrument, but it can be used as a stand alone music studio software. This means that any instrument for it is multi platform and is the exact same file for any operating system. In other words, this illustrates how this could be easily done if the companies in this business would come to terms about such a thing. Mind you, Reaktor isn't - as some may think - just software, it can handle midi and audio inputs and outputs pretty well. To prove my point here, just take a look at the complex setup that Tim Exile is running with it.
How does this relate to the Virus? I'm pretty sure that running such a complex synthesizer, and having to code it to meet all the formats used in this industry, is a waste of time and subject to constant change as the software or operating systems get updated. One perfect example is the transition from Apple's Snow Leopard (which is the one I'm still using today) to Lion, where developers were forced to update their products to be 64-bit compatible - even if the user chooses to run the software in 32-bit mode. If they paid attention to what pro users want, they'd never have done this. With this move they've left users that rely on older software wondering why the change had to come. And this isn't just some good old plug-ins; it's also true for things like the still very expensive (on the second-hand market) Clavia modulars. Does it make any sense? Not really, but here we see, again, Apple and their friends at Intel trying to push the market towards new hardware.
The truth is that current software still makes no proper use of multiple cores, despite what the marketing guys may tell us. And if they invested for the good of the users, technologies we already have, like the processors on our graphics cards, could be used to power some really demanding DSP. There are some companies trying to do that, all right, but it's another nightmare to jump into, since it's another platform - and of course it isn't the same for Nvidia or ATI. You can see where I'm going.
What about having some spare processor inside the computer that software companies could write stuff for? Sort of like an open DSP component inside the computer that would work like the Universal Audio cards do, but not in a proprietary, closed way.
Some may argue that the Virus is also a closed system, which it is. But it's also got a very good physical interface to go with it, and you can still play it live without a computer. So being an instrument, and a very good one, sort of makes up for it - and it could be even better if only the industry would come to terms about where it's going and common goals were set. My 2 cents. Rant over :twisted:
Well in my case my only interest in hardware is the fact that CPU technology has hit very real thermal limitations in the past few years, and they are not doubling in speed year over year (or even remotely close) like they used to. The CPU in my primary PC was purchased about 4 years ago, and benchmarks only a few percentage points below the fastest CPUs you can buy today.
So, although admittedly it is partially mental, I do not like having my polyphony or the type/quality of FX I can run constantly hitting a CPU ceiling. So dedicated hardware to take some of the processing load is the only viable option.
With that in mind however, CPU savings doesn't do me any good if the workflow around a piece of hardware is agonizingly cumbersome or the integration doesn't work very well. Then it becomes like a risk/reward balancing act where the risk is the cost of the hardware versus how much of a pain in the ass it will be. Every time I run the purchase of a new Ti2 through that equation, it comes up short and I end up foregoing the purchase (at least thus far).
I was a little confused by the call for a new software standard - we already have software standards. For example VST, created by Steinberg (now owned by Yamaha) and still the most widely used. It is designed for exactly the kinds of things being discussed here (even if it is not used for all of them):
http://en.wikipedia.org/wiki/Virtual_Studio_Technology
But even though, as far as I can tell, Steinberg has never required royalty payments to use VST technology, adherence is voluntary: anyone who wants to can follow the standard, but nobody is twisting their arm to do so. Perhaps that's the problem.
I could talk a lot about the reasons (from a software development company's standpoint) why a company like Apple, Propellerhead, AVID, etc. would decide to be defiant and use their own proprietary plug-in technology. If anyone is really interested in the hows and whys it makes sense for Apple to create AU plugins, etc., then maybe I will talk a little about that in another post. Short version of the story: it is in their best interest (toward the goal of gaining market share for their DAW host, which is the key to remaining competitive in the audio software market). It is not in their interest to make their platform dependent on a competitor's product (Logic being dependent on technology invented for Cubase? Not Invented Here! Must Create Own).
This might be sort of a tangential leg of this thread, and I'm not sure how exactly it relates to new hardware from Access but I just wanted to raise the point that there's no lack of a standard. What there is a lack of is motivation for every company to line up and use one of them.
There's an interesting recent article here about so many software companies getting beaten up in this market. Strong Musikmesse showings from only two companies (Steinberg-Yamaha and Cakewalk-Roland). See where that's headed? Strategic DAW position very important to makers of hardware instruments.
http://www.kvraudio.com/focus/frankfurt-musikmesse-2013---where-have-all-the-software-companies-gone-22161
TweakHead
16.05.2013, 01:22 AM
Yes, but even though VST can be considered a 'standard', the files are not exactly the same for different operating systems. I mentioned Reaktor because the same ensemble (.ens file) can be used on any operating system. These differences are handled by the platform that runs the plug-in itself, leaving the plug-ins out of that equation. Such a move would be more than welcome - as it would allow developers to focus on what's important.
I think the only product that has a good enough excuse to have its own plug-in format is Reason - since there's a clear advantage in making the plug-ins compatible with the rest of the rack. Apple clearly has some very aggressive ways of making their users pay for dedicated support for their products. That's precisely what I'm saying: I feel evolution is somehow halted by these greedy companies making things different for their own interests, which many times collide with those of the users.
namnibor
16.05.2013, 02:05 AM
Now, realize I may be coming from my Psychology college training when I say this, but it is also very much a basic humanistic stance: there is something to be said for a great physical, hands-on interface and turning physical knobs, even if, realistically, that physical interface is and has been controlling software for quite some time now. To me it's like trying to convince anyone that stepping into Woody Allen's "Orgasmatron" from his early and best movie IMO, "Sleeper", is a better experience than having real intimacy in all its unpredictability (ideally); "working solely within the box" would leave me with "I can't get no satisfaction" ringing in my soul!
Perhaps a strange analogy, but I seem to do strange well on this planet!
The fact that the modular movement has had a real surge of avid interest, to the point that MFB and other former and present hardware companies are making modular units, contrasted with Arturia, who started out in software and are now venturing quite successfully into hardware, makes me believe that perhaps we could be looking at an evolution that meets BOTH desires in the middle - with Native Instruments' hard-drive release for even their Maschine, et al. Perhaps we shall see more software companies leaping OUT from "the box" to hardware??!!
We are all fortunate in any case to be living in such technologically creative times.
Just remember this: Winsor & Newton, long-time makers of artist's oils, watercolors, and art supplies, are in NO way fearing that the actual painter and his or her interface, the canvas, will become extinct. The same goes for touring musicians that have to entertain crowds whose attention spans these days probably would not be sated by the stage presence of static musicians in front of screens. The general public probably would not be that entertained by automated music following a piano roll in a live situation... but who knows?
Freudian or not, I just happen to like knobs...and uh, switches!!:rolleyes:
Regarding Reaktor ensembles: the ensemble file and file format are similar in some ways, yet ultimately very different from a published software development standard like the API used for a technology like VST. I can go into a lot more depth on this subject, and software architecture in general, than might be welcome in a music forum, but suffice it to say that the ensemble file format is more analogous to compatibility between document formats (like Word documents among word processors) than it is to a development standard.
Basically Reaktor is an engine and programming environment, developed by one vendor (NI). In order for them to take that to a more "vendor neutral" level, they would have to publish a complete programming API that could potentially be implemented, license free, on any operating system and any hardware and use any language. Reaktor is NOTHING like that right now, it is provided to us only by NI and any instruments we buy are only valid as long as NI is in business unless they transfer all rights to someone else just before closing shop.
Also, with that, comes a huge amount of processing overhead and also sonic limitations. I'm not saying Reaktor synths don't sound good, but maybe you saw my recent post asking why a single instance of Prism, while no notes are being played, consumes a relatively huge amount of CPU?
The reason VST files are incompatible between operating systems is that the API standard describes a specification - a protocol, if you will, for doing something. It does not mandate where that instrument can run or what it is written in. Therefore, instruments can be written in literally any language as long as they conform to the API spec. Instruments written in a native language are always going to perform much better in terms of system resource consumption (think CPU cycles and RAM) than instruments that run within a synth-development toolkit like Reaktor.
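To make the "API is a protocol" point concrete, here's a toy sketch (completely made up, deliberately nothing like the real Steinberg headers) of what a native plug-in spec boils down to: the host and the plug-in agree on an interface, and any compiled code that honours it can be loaded, whatever language or toolchain produced it:

#include <cstdio>
#include <vector>

// The published "spec": a tiny interface both sides agree on (purely illustrative).
struct IToyPlugin {
    virtual ~IToyPlugin() = default;
    virtual void setParam(int index, float value) = 0;
    virtual void process(float* buffer, int numFrames) = 0;
};

// One vendor's native implementation of the spec - here just a gain plug-in.
struct ToyGain : IToyPlugin {
    float gain = 1.0f;
    void setParam(int, float value) override { gain = value; }
    void process(float* buffer, int numFrames) override {
        for (int i = 0; i < numFrames; ++i)
            buffer[i] *= gain;   // tight native loop, no interpreter in the way
    }
};

// The "host" only ever talks to the interface, never to the concrete class.
int main() {
    std::vector<float> block(64, 0.5f);
    ToyGain plugin;
    IToyPlugin& host_view = plugin;
    host_view.setParam(0, 0.25f);
    host_view.process(block.data(), static_cast<int>(block.size()));
    std::printf("first sample after processing: %f\n", block.front());
    return 0;
}

The real thing adds a lot more (events, GUIs, threading rules), but the principle is the same, and none of it dictates which language or toolkit the vendor uses under the hood.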
Are you familiar with Synthmaker? http://synthmaker.co.uk/
Similar idea to Reaktor. A toolkit that helps the creator focus on the specific synth implementation without worrying about the bits and bytes, low level coding and formal computer science knowledge required to write synth code from the ground up. But the result? For example the now legendary Sylenth1 started as a Synthmaker synth. Everyone loved the sound so the first order of business was to re-write it as a native synth. The performance improvements of doing so were huge, even though in that case only the UI was scripted in Synthmaker.
If you are familiar with it, you might remember that every time a good SynthMaker synth came out, many folks would post "when are they gonna rewrite this native so we can get the most out of it?" It's all about CPU and overall performance (like the way it feels when you tweak a controller; even small amounts of added latency diminish the user experience).
I hope I'm not insulting anyone's intelligence with a bit of geekspeak here, but the fundamental difference comes down to scripting (or dynamic) languages versus strongly typed languages.
Short version of the story: JavaScript, Python, LISP, PHP etc. usually end up with shit performance for this kind of work because they are interpreted and dynamically typed. C++, C#, Objective-C, Pascal/Delphi (FL Studio!) etc. are what's called strongly typed, compiled languages. They are much harder to write in, thus the coding labor and knowledge requirements are higher, but the payoff is much more efficient code.
The VST standard is a native-language API. It is possible to fuck things up, even with a written spec. I worked with the lead developer of SynthMaster (which is now gaining on Zebra in notoriety) to fix a bug where it was crashing in FL Studio and some other hosts due to a threading issue, because it was not properly adhering to the Steinberg spec. In other words, it appeared to work, but did not follow ALL the rules, so the devil was in the details.
I don't know if any of that makes sense or not but I was trying to illustrate why an API like VST is more powerful than a modular synth engine like Reaktor. At a licensing level they are also different because for Reaktor to be analogous, it would mean NI allows anyone to write their own player without using the Reaktor application (correct me if I'm wrong but I don't think that's even a glimmer in their eye).
Anyway, backing up a bit I'm still not sure what exactly that has to do with Access' incompetence with proper audio streaming over USB. No standard can really solve that problem. My example above with Synthmaster shows that publishing a standard does not guarantee that the synth vendor adheres to the standard perfectly, and more importantly allocates proper QA resources to test and verify compliance.
Berni
16.05.2013, 03:47 AM
What I'm trying to say is that the cheap laptop has not only replaced the crappy guitar that most contemporary musicians started with, but also the budget studio they cut their first demo in & a lot of the instruments they wanted to try but could never afford. Once you learn a decent DAW & all its shortcuts & instruments you don't need big expensive boxes with lots of knobs & switches & it is more natural to use what you learned on. There are millions of people out there producing some really cool electronic music with just a DAW & some speakers. Making music is all about heads & hearts, not gear.
Access blowing us away with the next big thing? My arse! I think they came to the same conclusion I did quite a few years ago. They're just riding it out now :p
I have an Apple MacBook Pro & can run any plug-in I want on it, 32 or 64-bit, on Lion, in any host that I have.
I can also create great works of art without slinging mud at a canvas :p
Wake up this is 2013!
Not only that, but there are just as many options for controlling software now as any hardware interface can provide. The other day I was looking through some old posts here where someone said that with a mouse you can't control two params at once like you can with two hands and a knob. And it was said in a conversation with me. How did I let that guy get away with that? ;) I think in that post I used the example of an X/Y pad like the one in Zebra to do the same (which it does), but nowadays I can have my hand on a mouse controlling an X/Y pad or on the pitch/mod stick (or both), or a few fingers on a few sliders, etc. The limits are purely mental in nature. Honestly, I find it much easier to control the UltraNova VST via generic keyboard mapping than to use the knobs on the UltraNova itself, with the only possible exception being the filter sweep knob (which is dead easy to grab on that particular device given its enormous size).
But at the same time I do kind of understand the mystique around a hardware instrument. The physical interface is designed with that particular instrument in mind, thus a relationship between the two is created that is unique and is kind of what makes that instrument what it is. Similar to guitars, they all have 6 strings (er well mostly), thus they are not a ukulele. But wait, a bass guitar has the same number of strings as a ukulele. What makes them different? Physical placement and other physical characteristics that define one instrument from another.
So I kind of see both sides.
But I do agree with you that software has eaten hardware's fucking lunch over the last 5 years or so, and the fact that Access has not responded with realistic price points indicates head up the arse syndrome big time.
Berni
16.05.2013, 04:33 AM
Ever see kraftwerk live?
I did, but only in dreams and videos.
Modern finds on YouTube, by the way, explain away the main reason I never did well during stage musicianship:
http://www.youtube.com/watch?v=UPSwGj45gxM
No matter what, it always seemed like I ended up looking for a fellow band mate that was feeling a little too confident, then the mixed martial arts side of me took over :)
Must watch that one to the end.. heheh.
There was a reason that electronic music in the 80's needed to be highly sequenced or at least put on tape in a room not susceptible to spontaneous grudge matches :cool:
Berni
16.05.2013, 06:02 AM
The reason why I am so glad none of my early band gigs were ever on video :) Thanks for sharing!
TweakHead
16.05.2013, 09:20 AM
@MBTC
I'm actually glad someone put on the "geek" hat and made things so clear.
I was using Reaktor more as a metaphor, well aware of its limitations and that it doesn't qualify as a computer language, let alone a standard.
I'm not even remotely as educated as you are when it comes to computer languages. What I was trying to say - if it even makes any sense at all - is that I would like the differences between formats to be handled at the DAW level.
Why? So that developers wouldn't have to port their creations a couple of times.
I don't even know if such a thing would be possible or not. Why do I think this has to do with Access? Simply because - based on what users say - the performance changes when you switch platforms and host software. And I imagine it's no easy task to keep up with all the changes. I mean, Waves Audio and all the others had to put in some work for their plug-ins to be Lion compatible, right? Even though we're talking about the exact same format here, which is Audio Units (I think even the VST ones had to be updated on the Mac).
I imagine that programming a synthesizer such as the Virus is no easy task and the integration could be easier if only things would look the same everywhere.
Now, it's true that there's many people producing music with just their laptops. I started like that myself, even though I'm more into desktop computers myself :twisted:
So I actually come from that background myself. I know all about "completely inside the box" music making. I don't even have a traditional music education at all.
But to go all the way and say: who needs physical interfaces or instruments these days? You'd have to be nuts, seriously. We were talking about MIDI controllers the other day, right? There's plenty of them out there, but almost all of them feel cheap and require either very specific assignments to be made and saved, or you simply can't get the level of control you get with a dedicated instrument - and that's a fact.
@Berni
I too have a computer that can handle pretty much anything. But wouldn't you like to have a Prophet 12 next to it? Come on dude... You know you would, even though it's 2013! ;)
TweakHead
16.05.2013, 09:42 AM
And I honestly feel that there's some wrong assumptions being made here:
first, namnibor's approach to music making is just as modern as the "inside-the-box" one. There's a reason "analogue" is making a huge comeback these days. Most of the guys that grew up in a software environment, such as ourselves, used some kind of "emulation" of classic hardware at one time or another, developed a lust for those instruments and are willing to try the real thing now.
second, take a look at the second-hand market, even for "virtual analogues". Do you honestly think that Discovery DSP Pro sounds like a real Nord Lead? I don't. And to be honest, a collection of synthesizers like the one namnibor owns puts almost any plug-in collection to shame in terms of sound quality - I'm experienced enough to know that, and so should you be. I don't know the technicalities behind it, I just know it sounds better.
third, it feels a lot better too. Having a dedicated interface makes you take the time to learn the thing inside out, and there's plenty of creativity involved in just combining the powers of multiple machines, let alone the modular analogue "eurorack style" stuff.
fourth, analogue modular is also very much alive and doing well, and there's a reason for it: it sounds better, and it offers possibilities that software can't even dream of so far. Because, let's face it: running a new high-end software filter with zero-delay feedback is very demanding on the CPU; now imagine coupling that with crazy emulations of weird circuitry that just messes with the CV, and being able to assign that anywhere you want. Think about it. Diva sounds wonderful but brings any CPU to its knees in "divine" quality. So, even though our computers can handle it, it's still a world apart from a real heavy-duty analogue setup in terms of sound and options.
I think it has its ins and outs, just like anything out there. Being able to automate everything, to process sound quickly and with just about anything, was a revolution. But if you take pride in your sources, there's simply no argument here: hardware still sounds better, but it's way more expensive, especially when you're talking about modular stuff.
Take a look at Make Noise stuff, for example ;)
To reply to the points above about Reaktor and a new standard: in some ways my comparison of a synth-dev environment versus a traditional down-to-the-metal language isn't completely ideal, or maybe it's just difficult to put into context.
But believe it or not, the challenges would be the same. For example, Java and the virtual machine technology created when Sun owned it were designed to do just that - allow code to be written once that could run anywhere, without a special port to every OS flavor. Write once, run anywhere was the promise. What it requires, however, is for what's called the Java VM to be present on that OS. Microsoft has something similar known as the .NET Framework.
In the scenario you've described, each DAW would have to have something like the Java VM or .NET Framework embedded in it - let's call it the CrossPlatformSynthHost, or CPSH, for purposes of this thread :) Anyway, someone would have to design, implement and own the rights to the CPSH implementation itself. Of course, nobody wants Steinberg, or Apple, or Cakewalk or Ableton or whoever to own the core technology; it would create a monopolistic scenario. This means they would need to start by forming a vendor-neutral committee (haha!! Now we are talking red tape, a big slow machine that gets very little done). Each member company would need representatives, who would spend all their time in meetings, arguing about the way things should be done as each of them tried to sway the others in some way beneficial to their own organization (at the end of the day, these organizations are there to make money, right? And that means staying competitive. They MUST compete, by definition).
Politics aside for a minute, and getting back to the cross-platform challenge: Java as a technology has been around for something like 20 years. Yet most of us own very little software that's written in it. Why? Mostly because it performs badly compared to specialized languages that allow better optimizations. With a cross-platform language you tend to end up with a least-common-denominator effect, where the language itself is limited by the fact that it cannot do device-specific things and take full advantage of the OS. There's a reason DAW software is not written in Java: the performance sucks. There is a saying in software engineering, "portability is for canoes"; it just means that whenever you make a technology portable across systems, there are going to be limits to that technology that are always begging to be broken through.
Then, back to corporate politics. Android devices are largely programmed against what's called the Android SDK, which is Java-based. It probably seemed like the logical language choice at the time, being cross-platform and all: Android is a highly fragmented OS with lots of incompatible devices, so a write-once language probably seemed appealing. What happened long-term is that Oracle Corporation bought out Sun Microsystems (the original creator of Java). They perceive Google (the creators of Android) as a hostile competitor, so once they owned the Java technology they decided to sue Google for using the technology that they *NOW* owned (operative word being NOW). Long story short: big headache for Google, that choice to use Java.
So yes, this is well into the geek-speak arena and I hope that doesn't sound like I'm reaching out of scope to illustrate a point, because I truly believe it's all relevant to the cross-platform synth host idea. It's a good idea and would be great if truly viable but it would be plagued with performance problems and political problems by very definition. That's the reason it hasn't happened, is all I really wanted to say.
I could be wrong, but I think the performance differences people report are not really due to the plugin standard used (VST, AU, etc.) but rather to the vast number of variable factors between the hardware and the plugin. Things like other devices sharing the same USB bus, creating latency challenges. Things like the ASIO driver and the audio interface type... we didn't talk about ASIO, but again it's just a published standard like VST (although in the case of the Virus that's no excuse, because they include their own audio interface!). Even something like the type of USB cable, or whether the ports are directly on the motherboard versus on a dedicated card, can make a big difference.
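For a sense of scale on the latency side, the ASIO buffer size the driver negotiates tends to dwarf most of those other factors. Quick numbers (my own arithmetic, assuming 44.1 kHz; a round trip roughly doubles these, plus converter and USB framing overhead on top):

#include <cstdio>

int main() {
    const double sample_rate = 44100.0;                // assumed sample rate
    const int buffer_sizes[] = {64, 128, 256, 512, 1024};
    for (int frames : buffer_sizes)
        std::printf("%5d-sample buffer -> %5.1f ms one way\n",
                    frames, 1000.0 * frames / sample_rate);
    return 0;
}

That prints roughly 1.5, 2.9, 5.8, 11.6 and 23.2 ms, which is why a system that only runs stably at big buffers feels so much worse to play, whatever plug-in format is involved.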
The vast number of systems out there, running vast numbers of OS flavors, with vast number of combinations of other devices connected to them can add some real challenges to hardware testing. I'm certainly not going to make excuses for Access, because I think if they properly allocated development and QA resources to the project, proper integration could be achieved, no problem. The UltraNova is proof it can be done, I don't hear lots of complaints about the integration with the plug-in from anyone regardless of platform with that product. And the price point of the UltraNova proves that it's not some monumental task that's going to break their bank to simply develop and test their software properly.
In my opinion it simply reveals a management problem: not enough resources of the right type being allocated to the right task. It's not a high enough priority for them if people are still willing to pay $3k for a synth that performs great as a live instrument but is poorly integrated.
So, technical solutions to a management problem almost always fail. There's only one way to solve the problem and in my opinion, new connectivity standards or plugin standards etc. are not the issue.
I do understand what you're saying, and to some extent consistency is great. I actually wish that all these vendors didn't have to republish their software in a gazillion different formats (VST, AU, RTAS etc.) and the world would just accept one of them and be done with it. But even with VST as the dominant standard, that standard evolves. Steinberg has the VST3 standard, yet most vendors still publish to the 2.4 standard, because there's not really a revenue motive to implement the special features of VST3 (I wish they did, because among other things hardware controllers work better with them).
It's just that, as other vendors show that integration is perfectly viable and the technology is already there to do it properly, how do we convince Access to stop dawdling, acknowledge and fix the issues that exist, and do whatever it takes to man up, take responsibility for delivering the Total Integration they promised and that many people paid for, and make things right? If Chris Kemper wants to go and play with his guitar amps for now, I don't know what we can do to pull him back into the synth world to give the Virus product line proper attention, unless he sees a direct negative impact on sales from the lackluster integration.
Wish I had the answer, but I don't. I have lots of opinions but no real solution to offer for that particular issue :neutral:
TweakHead
16.05.2013, 08:11 PM
That's a very good answer. Hats off! :cool:
That last part about Kemper is funny! And that's because it's totally true. I mean, it's not natural to leave a product that still has problems to be solved sitting on the shelf - especially when the product has such a high price, making people expect better support for it.
And at least for some people, they're doing things wrong. I think the specs on the Virus TI are still very appealing; we've debated this over and over here, all right. But the fact is, it seems like most of us are just waiting for them to fix these issues, or even for a new product that not only solves the problems but makes the Virus more competitive in a market growing rich with digital instruments that deliver good sound at reasonable prices. This is the issue, I think.
I thought the UltraNova only had a software editor that worked in stand-alone mode. I didn't know you could load it just like any other plug-in, like you do with the Virus, and that it could stream audio over USB. If they do that and it works fine, even though it's a mono-timbral synthesizer, it puts Access to shame. I mean, it's called the TI = Total Integration. It's not like it's only a detail for them.
I always thought that the TI thing was a big challenge and that people were being a bit unfair. But if there are other products doing exactly that with no issues at all, it's a different story. What I think still holds it together for them is that the synthesizer's sound is plain gorgeous. But the specs alone don't really cut it for me, owning the C and all, to spend another big buck on the next Virus - unless I'm 100% sure all the bugs are fixed. Still, I'd rather wait to see the new offspring - whenever it comes - because a company can't rely on the same product for so long without putting something new on the market.
It's kind of crazy this whole story. I mean, for me it's still the best Virtual Analogue out there when it comes to sound and synthesizer specs. The synthesizer's engine itself is really good and bug free. It's a real charm to program and a very deep machine, otherwise there wouldn't be such a thing like this forum. Why screw up on this integration thing? Beats the hell out of me, and your answer just made things more clear. You and Berni were right all along about this. I'm an atheist now :twisted:
Yeah, the UltraNova editor can only run as a VST/AU inside a DAW; you cannot launch it separately. Sometimes you hear people say "no standalone mode" and that gets misunderstood as meaning the synth is not a standalone instrument and requires a computer. It's not; it's actually designed for live play in the true synth sense, and you never have to use the USB cord if you just want to gig with it, same as the Virus. But some people don't like the idea of loading up a DAW to edit sounds; they want it to work like Maschine, where you can just launch the Maschine editor. To me it's not a problem, I've always got a DAW loaded anyway.
Even without the UltraNova/MiniNova as examples, the fact that there are many audio interfaces out there that stream audio from regular audio inputs to the computer via USB goes to show that USB is perfectly capable. Now granted, the more instruments you have, the sheer amount of sound data being converted from analog to digital might saturate the bandwidth of USB (this is why higher-end audio interfaces tend to use FireWire or, in some cases, Thunderbolt), but I believe USB should be able to handle the audio streams of the typical home musician. The higher bandwidth need is more for recording multiple instruments at one time.
That makes me wonder though... I think the TI2 only offers 3 stereo audio outs to the DAW if I recall correctly? How many multi-timbral parts could you have? It seems like only 3, at least as separate audio streams.
That could be one reason people run into issues with the Virus and not the UltraNova/MiniNova: because the UN is mono-timbral, it never needs to send more than a third of the Virus's maximum stream bandwidth through USB. A lot of people have pointed out that there's nothing really stopping Novation from making the UltraNova multi-timbral, since it's purely a software feature with a virtual analog synth like the UN or the Virus. But maybe they decided to err on the side of reliability rather than feature set? Who knows, maybe they tried to make it multi-timbral and realized the wonky nature of USB would put them in a dilemma like the one Access is currently in, so they just made it mono-timbral and kept the cost low.
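For what it's worth, even three stereo streams are nothing for USB 2 on paper. A rough sketch (my assumptions: 24-bit / 44.1 kHz streams, 3 stereo outs plus one stereo input return; real USB audio framing adds overhead on top of the raw payload):

#include <cstdio>

int main() {
    const double sample_rate = 44100.0, bits = 24.0;   // assumed stream format
    const double stereo_outs = 3.0;                    // the TI's USB outs as discussed above
    const double stereo_ins  = 1.0;                    // assumed stereo input return
    const double mbit = (stereo_outs + stereo_ins) * 2.0 * bits * sample_rate / 1e6;
    std::printf("raw audio payload: ~%.1f Mbit/s vs. 480 Mbit/s for USB 2.0 high speed\n", mbit);
    return 0;
}

So whatever is going wrong, it doesn't look like a raw bandwidth problem at the USB 2 level; it smells more like driver and scheduling territory to me.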
I wonder, if the Ti2 were used with only one audio out, would so many people still have so many problems with it? Maybe those that don't are using only one part at a time? I don't know; I seem to remember having problems even with only one audio out on the one I briefly owned. Besides, it would be a damn expensive mono-timbral synth up against the UltraNova as competition.
TweakHead
16.05.2013, 09:56 PM
I'm not much interested in using the multi mode myself, unless we're talking about preview purposes, of course. For recording audio I'd rather have all the voices available for each patch, and I surely wouldn't mind if they cut that feature altogether as long as things worked in a very reliable way. I think the integration has obvious benefits: being able to automate parameters is an obvious one, saving the patches used inside the projects is another. Editing on screen is useful for synthesizers with complex structures and many options like the Virus TI - I don't really miss it on my C, and if I did there are some options out there, so that's not really the bonus here. The other factor is the one you pointed out yourself: a great sounding synthesizer that doesn't hit the main CPU gives the best of both worlds - it behaves as an instrument and it's still easily configurable inside the software environment we use to make music. So we're talking high-end quality that spares our main CPU, and we're talking functionality and convenience.
Unless Novation has some wild genius that managed to make at first attempt what Access has been struggling with for so long without success (which I find hard to believe), I'd say it's shocking that other developers haven't implemented this on their own products.
About the other thing I talked about: it would be possible to have a card similar to Universal Audio's but open to third-party developers, right? That would be amazing. Maybe a good incentive for developers to implement more demanding code without worrying about hitting our main CPU too hard - like what's happened with Diva. I noticed that their first update was mainly focused on that, adding the ability to use more than one CPU core I think (but I'm not sure). This is a big factor when it comes to software, right? I have no doubt that if this weren't the case, software would easily compete with just about anything in terms of quality. I've seen some texts about current DSP theory (again, not nearly as versed as you are in such matters), so I have a general idea of where we stand today. I mentioned Discovery DSP Pro and failed to mention they've just implemented zero-delay feedback filters in their last update - which brings it closer to the level of the more recent offerings out there. There are a lot of them doing that: Diva; Monark (even though this one is a Reaktor instrument, if you take the time to dig through it, they've locked access to the filter - sad, very sad, but true); Madrona Labs' Aalto; Sonic Academy's ANA synth; Waves' Element; Lush-101, etc. All of these have modern (demanding) DSP code in them, and all of them present better representations of self-oscillating filters than we would even have dreamed about a few years ago. Funny how the good old Moog-inspired filters on the Virus (from the C onwards, I think) still hold up compared to even these new offerings. But not by much, I'd say.
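Since I brought up zero-delay feedback: as far as I understand it from those texts (so treat this as the textbook one-pole example, nothing from any real product), the trick is to solve the filter's feedback equation within the current sample instead of feeding back the previous sample's output, which is part of why these filters eat more CPU:

#include <cmath>

// Textbook "zero-delay feedback" (trapezoidal) one-pole lowpass.
// The feedback is resolved inside the sample rather than delayed by one sample.
struct ZdfOnePole {
    double G = 0.0;   // pre-warped, normalised cutoff gain
    double s = 0.0;   // integrator state

    void setCutoff(double cutoff_hz, double sample_rate) {
        const double pi = 3.14159265358979323846;
        const double g = std::tan(pi * cutoff_hz / sample_rate);  // frequency pre-warping
        G = g / (1.0 + g);
    }

    float process(float x) {
        const double v = G * (x - s);   // solve the implicit feedback equation
        const double y = v + s;
        s = y + v;                      // update the integrator state
        return static_cast<float>(y);
    }
};

One pole is cheap either way; as far as I understand it, it's when several of these get wrapped inside a nonlinear feedback loop (the Diva/Monark style stuff) that the per-sample solving really starts to hurt the CPU.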
Which one of these do you guys like best? (Interesting subject, no?) :) Very cool thread, btw.
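For anyone wondering what "zero-delay feedback" actually means in code, here's a minimal one-pole lowpass in the trapezoidal form - my own illustrative sketch in plain C++, not taken from Diva or any other product:

// Minimal zero-delay-feedback (trapezoidal) one-pole lowpass.
// Illustrative sketch only; names and structure are my own.
#include <cmath>

struct ZdfOnePole {
    float z = 0.0f;  // integrator state

    float process(float x, float cutoffHz, float sampleRate) {
        float g = std::tan(3.14159265f * cutoffHz / sampleRate); // prewarped gain
        float v = (x - z) * g / (1.0f + g);  // feedback loop solved algebraically
        float y = v + z;                     // lowpass output
        z = y + v;                           // state update (trapezoidal rule)
        return y;
    }
};

The whole trick is the division by (1 + g): instead of sticking a one-sample delay in the feedback path, the implicit equation is solved per sample, which is part of why these filters cost noticeably more CPU than the old-school designs.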
About the card that's open to third-party developers... that was what the TCE PowerCore cards were supposed to be. You might remember there was a Virus plug-in for it that ran off the DSP on the PCI card or FireWire unit. But TCE stopped production on the product, not sure why. The Virus plug-in was based on the feature set of an early model (Virus A or B, I guess), and supposedly for all practical purposes it was just like having Virus integration that actually worked. Anyone have any info on why progress on this ceased? It would be really unfortunate for those who invested heavily in them (as I recall they weren't cheap, and you had to pay for each Virus instance you wanted loaded; for example, four instances were the equivalent of having four Virus keyboards, but the license cost was x4 as well, in addition to the PowerCore hardware itself).
What's funny is that throughout all this, GPUs have made insane progress over the past five years, achieving the type of performance gains that CPUs used to. What's more, they excel at parallel operations, something CPUs can obviously do as well, just apparently not as well as GPUs. Toolkits like the CUDA SDK for NVidia chipsets allow writing pretty much any sort of application so that it runs on the GPU instead of the CPU. I'd hazard a guess that the average low-end gaming GPU (think $100 graphics card) has the computational power to run circles around the DSPs used in devices like the Virus (although it's a little hard to compare them on paper).
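Just to show the shape of the thing, a bare-bones CUDA sketch that offloads a trivial bit of audio processing (a gain) to the GPU might look like this - purely hypothetical, assuming only the standard CUDA runtime API:

// toy_gain.cu - apply a gain to a block of samples on the GPU.
// Illustrative sketch; compile with nvcc.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void applyGain(float* samples, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per sample
    if (i < n) samples[i] *= gain;
}

int main() {
    const int n = 44100;                       // one second of mono audio at 44.1 kHz
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 0.5f;

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    applyGain<<<(n + 255) / 256, 256>>>(dev, n, 0.8f);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first sample after gain: %f\n", host[0]);

    cudaFree(dev);
    delete[] host;
    return 0;
}

The interesting part isn't the math, it's the two cudaMemcpy calls: every block of audio has to make that round trip, which is where the latency questions discussed later in this thread come in.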
I really don't know why the GPU is so under-utilized in audio processing. It is an amazing hardware resource that all but the lowest-end desktops and most laptops already have inside the box. It could be because the GPU market is split (unevenly, I might add) between offerings from NVidia and AMD, although NVidia has been the clear leader for some time and I believe has the greater market share. I know there were a few plug-ins available that ran on CUDA; I actually downloaded and tried a convolution reverb plug-in that never seemed to take off.
You can see some of the non-gaming related audio & video apps that run on CUDA rather than the CPU here:
http://www.nvidia.com/object/cuda-apps-flash-new.html#state=filterOpen;filter=Video & Audio
TweakHead
16.05.2013, 11:50 PM
http://gpuimpulsereverb.de/
there is this one which is available already! For this type of reverb it's actually pretty cool that it doesn't hit the CPU. Shows what's being wasted ;)
I remember trying that one a couple of years back. Seems like I ran into some issues with it, but I don't remember what they were. It looks like an actively maintained project, so maybe I need to give it another try.
There was another convolution reverb I tried, but I don't recall the name of it. It was a pure CUDA implementation (whereas this one seems to work on both Nvidia and AMD GPUs).
Berni
17.05.2013, 05:14 AM
If Chris Kemper wants to go and play with his guitar amps for now, I don't know what we can do to pull him back into the synth world to give the Virus product line proper attention, unless he sees a direct negative impact on sales from the lackluster integration.
This is very funny and not at the same time. It's bad enough that they have only one guy on support (poor Jorg), but apparently only one programmer as well, who seems to have left the project like the proverbial rat from the sinking ship. The writing has been on the wall for some time if you care to read it. It's a damn shame if you ask me, but you would have to be out of your mind to buy a new Virus at this point, unless of course you don't care about the TI side of things.
Well, if they really have only one programmer, that's part of the problem right there. That would just mean Kemper is a greedy asshat; as much money as he makes per synth, he can't afford to staff better than this?
Is that confirmed true or speculation about only one guy handling all aspects of code? If true it would also be confirmation of what I said recently about Virus integration not working being a resource allocation problem, just a much more basic problem than I originally imagined (i.e. not even quite a resource *allocation* problem, but one of just not having proper resources at all).
What you said is why I find it so hard to bring myself to do it (buying a new Virus). But I thought most folks were reporting better luck with integration lately, so I thought it improved?
Berni
17.05.2013, 06:01 AM
All I know is there have been no updates to the new OS5 since January. It brought a whole new bunch of problems with it, with only a few enhancements over the previous OS, and it took over a year to deliver. If there is only one guy working on it, it's at the weekends.
feedingear
17.05.2013, 02:12 PM
If I could harness my GTX 570 along with the i7-2600K and my 16 GB of RAM, I would be a happy happy man.
Try the reverb download Tweak posted. I'm going to try it soon, possibly this weekend. I remember it crashing on me a few years ago with an error, but it's been patched many times since then, and that was inside FL Studio, a DAW that not everyone tests their plug-ins against.
Here's the other reverb plug-in I tried. It is NVidia-only though:
http://www.liquidsonics.com/home.htm
Supposedly also Nebula 3 Pro, which is a multi-effect plug-in, has CUDA (NVidia) support:
http://www.acustica-audio.com
TweakHead
17.05.2013, 05:26 PM
I want a synthesizer that can make full use of my graphics card! Make it 20 times more powerful than Diva! :twisted:
I don't see why someone hasn't yet, honestly. Maybe they're worried about building technology that's dependent upon a sole chip vendor like NVidia, but in concept it's not too different from something like PowerCore, only way more advantageous for them since NVidia GPUs are already so prevalent. A lot of people have them and don't even realize it.
The rig I'm typing this on right now has a GTX 690 with 4GB of memory on the card! That's more than a lot of folks' laptops have for primary RAM! :) Talk about an under-utilized hardware resource. Nuts that there isn't more audio stuff available for it.
On the other hand, we could be using our GPUs to mine Bitcoins, ultimately putting cash in our pocket for frivolous expenditures like overpriced synth hardware :) :)
TweakHead
17.05.2013, 11:16 PM
Another thing I failed to mention was oversampling. Higher sample rates prevent aliasing and the so-called artifacts that are most noticeable in the high frequencies, but that's also more demanding on the CPU. It's become common to have an option for the amount of oversampling we want to use in software synthesizers, and virtual analogues also include that in their detailed specs. That's another feature that requires a lot of processing power. The technology is there and it's easy to implement, I think, but it isn't pushed to its limits because the developers have to compromise between quality and CPU hit. So as our processors get better, we get a better taste of what can be done with software. The other feature I find to be particularly CPU-intensive is unison. Then there's always multiplying all of this by playing many voices at once, of course.
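To make the cost concrete, here's a deliberately crude sketch of 2x oversampling around a saturation stage (my own illustration; a real synth would use proper polyphase or halfband filters rather than the naive interpolation below):

// Crude 2x oversampling around a nonlinearity - illustrative only.
// The aliasing-prone stage (tanh) runs twice per input sample,
// which is roughly where the extra CPU cost comes from.
#include <cmath>

static float prevIn = 0.0f;  // state for the naive interpolator

float saturate(float x) { return std::tanh(x); }

float processOversampled2x(float in) {
    // upsample: naive midpoint between the previous and current input
    float mid = 0.5f * (prevIn + in);
    prevIn = in;

    // run the nonlinearity at twice the rate
    float a = saturate(mid);
    float b = saturate(in);

    // decimate: naive average standing in for a proper anti-imaging lowpass
    return 0.5f * (a + b);
}

Bump that to 4x or 8x and the nonlinear section of every voice does four or eight times the work, which is exactly the quality-versus-CPU trade-off developers end up exposing as an option.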
I read somewhere that our GPUs are still working in 32-bit and that 64-bit would be ideal for high-end audio applications. Again, I can't even say whether this is true or not.
But do you think it would be possible to have a real high end synthesizer, multi-timbral, with a very high internal sample rate, state of the art filters and all of that running from our graphics card?
I personally think yes, a lot of the more powerful graphics cards could run multiple instances of mega-synths and FX. Actually three or four years ago I thought this would have been farther along than it is. In truth though, I know very little about the CUDA SDK and I've read it is cumbersome and difficult to program against. I think NVidia would really need to find ways to make audio applications easier to create, although I say all of this without even having taken a close look at CUDA from a programming standpoint (pure speculation).
There is a big disparity in the amount of power across cards. The one I mentioned in this PC (gtx690) is a monster, extremely powerful dual GPU card.. the one I have in my actual music PC is a much more modest low end card (gts450), so there would be a big difference in music making potential of each of those, just as there is a big difference in gaming performance.
I'm not sure about any aspect of GPUs being 32-bit, though, or at least not sure what that might refer to specifically (maybe single-precision versus double-precision floating point, but I'm only guessing). The memory bus on some of these NVidia Kepler-based graphics cards is 256-bit, or 256 x 2 in the case of dual-GPU cards, and of course both 32- and 64-bit drivers are created for them.
However, I'm sorry to say I'm not holding my breath, because CUDA technology has been available for this type of use for 4-5 years now. The graphics cards have gotten phenomenally faster and more power-efficient, yet utilization of them for anything but graphics seems to move at a snail's pace.
It is promising to see these reverb plug-ins making progress in their development. Reverb is a really CPU-gobbling and commonly used effect; in fact, if there were one effect I would pick to offload to a secondary processing mechanism, that would be the one.
TweakHead
18.05.2013, 12:38 PM
Yes, but why stop there? If we take a look at Universal Audio's offerings, some of them are quite demanding as well, simply because the dedicated DSP affords them the luxury of using more complex code in their products than would otherwise be practical for mixing purposes. Actually, I wonder why Universal Audio hasn't put out some synthesizers yet. But maybe they will if they pay attention to the market. I mean, most people these days are demanding more and more quality in their instruments. That's the main reason analogue has returned: quality, and interaction with the instrument of course. And all of that without making the CPU ask for mercy.
I wondered about this as well. When something like the quad-core Apollo interface costs something like $3000 with the Thunderbolt card, why not go ahead and turn it into a full-fledged instrument with synth plug-ins that run on it? Maybe they are headed in that direction. I guess they figured all the really good synth developers are already self-employed and creating their own VSTs, and that hiring someone mediocre to develop a synth just to say they have one in the product lineup isn't going to result in top-notch brand recognition. It's only a theory, but one possibility.
I've been thinking more about why nobody has made a firmer commitment or organizational investment to audio on CUDA, and again I've come up with a possibility that would give me pause as a developer. It doesn't mean it's THE reason, it's just one that could be a showstopper: basically, if I as a developer decide to invest heavily in CUDA (let's say I invest enough man-hours learning their SDK, then developing a synth), I might end up spending something like 1000 man-hours, either my own labor or contracted out, to do so. That's a major investment of time, money or both. NVidia drivers of course are always evolving and getting updated. They do backward-compatibility testing for games every time they do a driver update, but what's to say they are going to add my synth to the list of apps to test for backward compatibility when they do a new driver release? Probably not, at the current stage, because they are in the graphics business rather than the music business. For the type of investment it would take to develop the synth, I would need some level of assurance that they are not going to blow my synth out of the water with a single driver update, and honestly right now they are probably not going to be able to provide that to a synth company. Maybe a larger company could form some sort of partnership with them and get it done, but it would be a very risky move for a small developer to invest so much only to have all their eggs in one basket.
The good thing about dedicated audio hardware is that if you've got a stable setup, there's usually nothing pressing that says you must install every update for every piece of software as it comes out, and there's really nothing about the standard automatic Windows or OSX updates that is going to make core changes to the way audio is handled at the kernel level, or to something at a higher layer like the VST API. That's the beauty of the VST API: it is designed for audio technology, and thus very conducive to creating synths. From what I hear about CUDA, not so much in its current form. Also, most people will update their graphics drivers regularly just as a matter of maintenance, and would not think about the possible impact on their music setup of installing a newer driver, then having all their synths go tits up or whatever.
TweakHead
18.05.2013, 03:30 PM
You're probably right. NVidia is pushing for better gaming performance above everything else. I think this CUDA thing is also some form of publicity: they're glad that some scientists find other uses for their powerful hardware and all of that, but they're certainly not going out of their way to make their day. The same goes for audio applications. But what about OpenCL? I know that's some kind of standard for such things, isn't it? I remember Steve Jobs making a big deal out of it when it was introduced to Mac OS, and thinking: what happened to this revolution? How come I never saw it being used?
OpenCL 1.0 was released with Mac OS X Snow Leopard (http://en.wikipedia.org/wiki/Mac_OS_X_Snow_Leopard). According to an Apple press release:
Snow Leopard further extends support for modern hardware with Open Computing Language (OpenCL), which lets any application tap into the vast gigaflops of GPU computing power previously available only to graphics applications. OpenCL is based on the C programming language and has been proposed as an open standard.
Same story we're talking about here. The technology seems to be there, and we certainly have more power in our machines than ever before, but the industry has interests of its own and it's very hard to come to terms and create standards or give developers the kind of assurance you mentioned. I feel that's the case for almost everything.
Actually, much more is happening with CUDA beyond just gaming applications. Most folks are only familiar with the GeForce line of products, but check out the Tesla line of cards, for example:
http://www.nvidia.com/object/tesla-supercomputing-solutions.html
However, some of their Tesla-line cards cost thousands of dollars. Sometimes you look at one of their scientific-use cards and wonder why, if the paper specs are the same, it costs so much more than the consumer gaming card that uses mostly the same internals.
Part of that comes down to things I alluded to before, like backward compatibility with regard to what it's used for. The GeForce line allows them to focus on gamers, and only be concerned with driver compatibility for games. With the Tesla line they can optimize for scientific uses and such, without worrying about goals for the gaming market.
For example, if you've ever looked at a big PC maker like Dell or HP, they typically have their website divided up into consumer and business models. If you look at the technical specs, a given consumer laptop that costs $1500 might be the equivalent of a similar business model laptop that costs $2500 or more. What's the difference? Why wouldn't a business just buy the cheaper consumer version?
Most of that comes down to availability of parts. If you're a large corporation ordering 500 of those laptops, you want to be damn sure that if you need to replace parts a couple of years down the road, those parts are still available quickly (i.e. overnight shipment) from the manufacturer. It's critical when you have that many assets of the same type in the field. So with the business-grade models you are guaranteed a certain level of parts availability. Consumer models are meant to be sold onesy-twosey, and there's no guarantee the consumer can get a replacement part quickly direct from the manufacturer past the one-year warranty; if there is, it comes with extra warranty cost and wait times.
It might seem we're drifting far from music-related discussion here, but the concepts are the same with regard to CUDA. It's definitely a real technology that has valid uses; it's just that the scientific uses are niche enough that they need to pay a premium for the same hardware, the primary difference being drivers that are optimized for their use instead of gaming. This would all be fine for audio, except that it would be a hard sell to tell a synth lover they need to spend $3k-6k on a GPU to use it as an auxiliary synth or FX processor; the value starts to decline. The real value is in using the GPU that folks already have, and I can see some hurdles to that.
feedingear
19.05.2013, 04:41 AM
Try the reverb download Tweak posted. I'm going to try it soon, possibly this weekend. I remember it crashing on me a few years ago with an error, but its been patched many times since then and that was inside FLStudio, a DAW that not everyone tests their plugs-in for.
I'll have a look-see - gotta admit I'm pretty happy with QL Spaces atm for simple convolution verbs. And now at work I am learning to use this - and god damn it sounds good...
http://www.bricasti.com/m7.html
Berni
20.05.2013, 09:17 PM
Jeez I hope so, those things don't come cheap! Totally jealous!
plaid_emu
15.06.2013, 03:24 PM
From EvilDragon at the Gearslutz forum:
Hell, can someone please post there on Virus forums that the main reason CUDA is under-utilized in audio processing is its incurred processing and data transfer latency? It's simply a different kind of CPU and doesn't scale well for all kinds of math operations. Not everything can be perfectly parallelized in audio DSP. Urs could say a thing or two about this. heh
I don't want to register there (not owning a Virus either) just to say that. Thanks. :)
I can fully believe not all audio applications lend themselves to parallelism, but considering the amount of polygon rendering that can be passed between multiple GPU threads with insanely low latency, I have to believe there are at least some audio applications that could be handled very effectively using CUDA.
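Convolution really is the friendly case: each output sample is an independent dot product against the impulse response, so one GPU thread per output sample maps on naturally. A rough kernel-only sketch (host setup would follow the usual cudaMalloc/cudaMemcpy pattern; real convolution reverbs use partitioned FFT convolution rather than this naive form):

// Naive direct convolution: one thread per output sample, no dependency
// between output samples. Illustrative sketch only.
__global__ void convolve(const float* input,   // length n
                         const float* ir,      // impulse response, length irLen
                         float* output,        // length n
                         int n, int irLen)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    for (int k = 0; k < irLen && k <= i; ++k)
        acc += input[i - k] * ir[k];
    output[i] = acc;
}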
Thanks for posting his message for him, but if he can't be bothered to register (I don't own a Virus at the moment either, BTW), then I'd say there's not much value in a drive-by post like that unless he wants to stick around and discuss exactly what is different about them, and what kind of math needs to be done in audio that GPUs are not good at. I would certainly listen.
EvilDragon
15.06.2013, 06:02 PM
So alright, I registered. :) Hello!
Yes, CUDA is great for highly parallel operations. Convolution lends itself greatly to this; that's why one of the first audio uses of CUDA (or, better said, GPGPU - we have OpenCL as well) was precisely convolution. There are things which won't work as well, because they depend on linearly executed algorithms - like delays, algorithmic reverbs, lookahead compressors/limiters and most filter designs. Why? Because they depend on previously calculated samples, and with the high level of parallelism that we have in GPGPUs, there is a problem of returning values consistently in time, which is highly relevant for serially executed operations.
So, this means that you cannot simply port the whole synth structure to a GPU, because things depend on each other - oscillators precede mixers which precede filters etc. This is what would actually cause greater latency than when using a regular CPU which has special registers to help with fast calculation of certain operations (MMX, SSE, AltiVec, etc.). And that is why GPGPU is not yet used for offloading whole synth architectures from the main CPU.
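To make the "depends on previously calculated samples" point concrete, here's the classic one-pole recursion (a generic textbook form, not anyone's product code): sample n cannot be computed until sample n-1 exists, so you cannot hand each sample to a different GPU thread.

// y[n] = a * x[n] + (1 - a) * y[n-1]
// The feedback term makes the loop inherently serial across samples.
void onePoleLowpass(const float* x, float* y, int n, float a) {
    float state = 0.0f;
    for (int i = 0; i < n; ++i) {
        state = a * x[i] + (1.0f - a) * state;  // needs the previous output
        y[i] = state;
    }
}

You can still run many independent copies of a loop like this at once (one per voice, one per filter), which is a different kind of parallelism than splitting the math inside a single filter.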
Now, SOME things can be offloaded to them, and when they are done well, it's a splendid thing. For example, the analytic zero-delay feedback filter calculation that's done in u-he Diva could be parallelized on a GPU to a great extent - but this is not yet in u-he's plans.
There is also a problem with compatibility and lack of proper standard: we have CUDA, OpenCL and Microsoft's Direct Compute. As if it's not enough to support VST, AU, RTAS and AAX? :)
Hi, welcome, and thanks for registering :)
The problem of returning values with consistent timing is one that GPU developers are fairly accustomed to dealing with, I think... recently there has been somewhat of a spotlight on frame time variance and latency with regard to rendering complex 3D scenes on multiple GPUs.
What's odd about mentioning MMX and SSE, for example, as features of a CPU that rule out doing the same on a GPU, is that these extensions were originally created to do 3D-graphics-type operations; something seems at odds there.
I've never developed using CUDA, so to some extent my guesses here are admittedly uneducated, but even given the understanding that some algorithms are simply serial in nature, I still don't see why each oscillator couldn't have its own thread, reverb1 its own thread, reverb2 its own, and so on. One of the big selling points of the DSP in the Virus is its specialized parallel filter processors, so let's say those are parallelized on the GPU, or worst-case scenario, each filter and envelope gets its own thread.
In other words, to the best of my knowledge most plugins on the CPU today are not achieving more efficient use of multi-core CPUs by using parallelism to solve math problems that are serial in nature, they are simply using additional threads to divvy up the workload of separate features.
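Sketching what that "thread per feature" idea might look like on a GPU (hypothetical, just to illustrate parallelism across voices rather than across samples; host-side setup omitted):

// One GPU thread per voice: each thread runs its own serial filter loop
// over the current buffer. Parallel across voices, serial within a voice.
// Purely illustrative - not how any shipping synth is structured.
__global__ void renderVoices(const float* excitation, // voices * bufLen samples
                             float* out,              // voices * bufLen samples
                             float* filterState,      // one state value per voice
                             int voices, int bufLen, float a)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= voices) return;

    float state = filterState[v];
    for (int i = 0; i < bufLen; ++i) {              // still serial inside the voice
        float x = excitation[v * bufLen + i];
        state = a * x + (1.0f - a) * state;
        out[v * bufLen + i] = state;
    }
    filterState[v] = state;                          // carry state to the next buffer
}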
I hear ya on the lack of proper standard, although my original thought on CUDA is really the same as the old Virus plugin on TCE Powercore cards... that was certainly not a standard, but a proprietary solution dependent on owning that card. The difference I see with NVidia is that an insane number of people already have these in their systems, going unused for the most part while making music.
EvilDragon
15.06.2013, 07:26 PM
The thing is that CUDA cores really do just relatively simple operations. They are nowhere near the scope of math operations that regular and specialized CPU registers can do. And while MMX, SSE and others were introduced to add to 3D rendering performance, that's not the only thing they were good at doing. SSE and AltiVec do a lot for FFT processing as well.
And ultimately, this is a major difference: GPUs need to calculate a lot of pixels for a couple dozen frames per second. That's a couple dozen. Audio operations need to happen at least 44,100 times per second, or more in the case of oversampling. This is where that "returning values consistently" problem occurs.
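The deadline arithmetic is worth spelling out (the round-trip figure below is an assumed, illustrative number, not a measurement): at 44.1 kHz a 64-sample buffer leaves roughly 1.45 ms to produce the next block, against the 16 ms or more a game has per frame, so any fixed per-block cost for kernel launches and PCIe copies is proportionally far more painful for audio.

// Rough deadline comparison - the GPU round-trip cost is an assumption.
#include <cstdio>

int main() {
    const double sampleRate = 44100.0;
    const int    bufferSize = 64;     // a typical low-latency audio buffer
    const double audioDeadlineMs = 1000.0 * bufferSize / sampleRate;  // ~1.45 ms
    const double frameDeadlineMs = 1000.0 / 60.0;                     // ~16.7 ms at 60 fps

    const double assumedRoundTripMs = 0.3;  // hypothetical launch + copy overhead

    printf("audio buffer deadline: %.2f ms\n", audioDeadlineMs);
    printf("video frame deadline:  %.2f ms\n", frameDeadlineMs);
    printf("assumed round trip is %.0f%% of the audio deadline\n",
           100.0 * assumedRoundTripMs / audioDeadlineMs);
    printf("assumed round trip is %.1f%% of the frame deadline\n",
           100.0 * assumedRoundTripMs / frameDeadlineMs);
    return 0;
}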
CUDA can do insanely complex stuff. Think PhysX calculations and fluid dynamics. Stuff that makes audio algorithms look like child's play, comparatively speaking.
http://www.youtube.com/watch?v=EeblWU0pV5E
EvilDragon
15.06.2013, 08:41 PM
Really, physics and audio DSP are very different beasts to tackle. What looks visually impressive and "complex" there in that demo doesn't mean that CUDA can do equally complex audio DSP algorithms at the same level of facility...
Perhaps, but the fact that there are audio plugins like reverb that exist proves it can be done. That demo is just one example. Physics calculations such as fluid dynamics ARE complex, period... we don't need eye candy to establish that.
My personal theory is that the real barrier is the learning curve of the CUDA SDK, or perhaps that NVidia has not given proper attention to documenting and/or accommodating certain features, but that the processing capability is fully there. There is a huge amount of unutilized potential.
TweakHead
15.06.2013, 11:42 PM
Now, if I may make an uneducated guess, I think the point is we've got these cards with some processing power on them already in our computers, and we're debating here whether any use could be found for that little extra that usually just takes a nap while we're making music and audio-related stuff. TC PowerCore and Universal Audio have specialized DSP cards, we all know that. But since Apple, for example, sells Mac Pros with a lot of audio people in mind (not only, but also a lot), how come something like the two examples I mentioned hasn't come with such computers, as some kind of open DSP card for any developer to make use of? Why wouldn't this benefit the industry? It would instantly create a new market for better plug-ins while boosting performance at the same time. This is a recurrent thought I have. Maybe it conflicts with market interests, where everyone's trying to pull in the profits on their own, I guess. But if the user experience were the priority, such a thing would make perfect sense. We have super sound cards - even non-gamers - in our computers; we could just as well have something dedicated to proper DSP usage. That's just my logical conclusion.
We're seeing more and more technology being implemented as CPUs begin to offer the capacity to cope with it - like zero-delay feedback filters, which are very demanding to process; I take that for granted, even after some experiments with Reaktor I've done myself. So what I'm saying is: more quality is possible, and we're not getting the best we could unless we're paying the BIG bucks for it, which leaves us totally dependent on certain brands and their support and dedication to updates. TC has died, plain and simple. Universal Audio doesn't extend their product range to instruments, which is somewhat weird to my mind, but there's this big hole in the market today which could be filled with something else. And the lack of a standard is something we've talked about here; this could all be history with these so-called "no-brand dedicated DSP cards". My 2 cents.
Well, in a way I guess we're discussing the same thing -- but that DSP card already exists in the form of a graphics card that isn't used during most folks' music making.
It's not a "no-brand" solution in the sense that one vendor supplies the graphics cards, but as far as I know the software SDK for CUDA is completely free/open and available to anyone... I see lots of hobbyists working with it... example here: http://www.theover.org/Cuda
There's also a tremendously flexible range of power to choose from among GPUs. You can pay $50 for a graphics card or you can pay $1000 depending on how much processing power you want (i.e. think plugin instances). You can buy one of them today and add a second or third later and scale linearly. The guy above is working with low-end (by today's standard) GPUs and getting results.
Now EvilDragon's position is that there are technical reasons that some types of audio applications won't work on CUDA, and I'm not completely denying the possibility, I'm just looking for answers to what they are that I can digest, because the evidence I see seems contrary.
As far as someone coming out with a card that is not proprietary and is designed for open use.... well the problem there is the level of R&D required to produce the hardware and software to do something like that is insane... tens of... no probably hundreds of millions of dollars. If someone invests that kind of money, there needs to be some return in it for them, they cannot just invest it as an act of goodwill to give to the community.
So we end up back at the folks like TC Powercore... someone there bit the bullet and invested some money and took a gamble. Apparently there wasn't enough money in it for them to sustain.
So I think what we are more likely to see are less-ambitious, specialized devices utilizing DSP. Actually that's all the Ultranova is, a DSP with an audio interface and software plugin.... but for a few more bucks why not add keyboards and knobs and call it a synth. Much cheaper to support a single-purpose hardware synth than a complete computing platform like CUDA.
TweakHead
16.06.2013, 12:58 AM
You're saying that this card would cost a whole lot of money and possibly produce no returns. That's possibly true, but only until it becomes a standard - made an integral part of a high-end computer. It doesn't seem totally impossible to my mind. Some years ago it wasn't even standard for computers to come with a sound card at all, for example. As soon as they showed up, more applications came along to fill this increased multimedia potential in computers; not only that, but the OSes themselves grew more mature with that move. So I think that, naturally, only big companies could pull this off, like Apple, which I mentioned above. They can make the whole world use touch phones, why couldn't they do something like this? And sell the products for their all-too-special Logic X (more like XI) in their dearest App Store, letting everyone make some bucks. I think an open card doesn't mean there can't be competition and profit; that's only relegated to software, perhaps. But making such a thing as ordinary as RAM or the graphics card would be great. It could also, I think, serve the gaming industry. We don't often talk about that, but rendering real audio in games in real time still has a long way to go; there's room for improvement. And games have proven useful for pushing the industry toward greater innovation, and even for financing it. Just a thought, of course.
But no one would deny this would be useful for the pro user. I mean, the new Mac Pro they're about to sell looks like a turbine from a spaceship and has all the looks and specs of a great machine, except... well, you really need to buy a decent sound card, a Universal Audio card too, maybe an Apogee Rosetta PCIe (if you're running a big shiny biz), and so forth. So, let me think it through here... Mac what? Pro? Reaaallyyy? Just an ordinary computer with nothing more than your laptop has in it, except more of the same (literally). If this is what the market economy has to offer us, then I say that innovation is being halted more than helped by it, that's all.
And if we're honest, that's happening every single day. Have you seen an iMac on the inside? They could easily have made them more resistant and better constructed, but what's the point in making a great product that would last forever? No point! Except, of course, for the user. This is where we're at at this point in evolution, and it makes me sick. I like to think that more computing resources within a computer, so as to enable creativity, are worth it, simply that. What the economy needs to pull it off is really not the point. I mean, it's done a lousy job at letting people keep their jobs in my country; it's outsourcing our most successful stuff to the third world, bypassing all sorts of laws that give people their rights, including care for the environment, children in factories, and so forth... Screw the economy, we're talking evolution here. We just need to stop pretending that Mac Pros really live up to their name, because they don't. :twisted:
But Apple and companies like them are not interested in creating open-standards, they are interested in highly proprietary devices and defending their design patents like pit bulls.
It's hard for me to understand your position from your message; it seems like it starts off saying you'd rather Apple be the holder of a standard like CUDA, but then later in the message it seems you're dissatisfied with Apple?
Anyway one of the things that helped Apple get back on their feet, aside from a major influx of money from Microsoft, was when Jobs was put back in charge and he trimmed their product portfolio down to focus on only a few things. In other words I doubt we're going to see them expand into music hardware.
TweakHead
16.06.2013, 04:51 PM
You're right. I was saying that only those companies would be capable of achieving such a feat, but their focus seems to be elsewhere these days. I'm not totally unsatisfied with Apple. I really like the software and the overall reliability of it. However, that doesn't stop me from judging their current priorities - which are portable devices, namely phones and tablets.
I just read in the news that Microsoft will release another Windows 8 version later this year for the same reasons: most of their customer base wasn't happy with the interface being oriented more towards touch devices. They're actually being punished for doing what Apple's been doing, successfully, so to speak.
I think these companies, like you say, are interested in creating "highly proprietary devices", and that doesn't help the user community in many cases. I was also stressing the point that there's a big marketing hype surrounding some products like "high-end computer workstations". You need those, plus all the other equipment, to release the computer from its "inside the box" form - like another jail - so as to be able to use it creatively.
For graphics you need a graphics pen, for audio you need plenty of stuff. For such big bucks, you'd expect a system with a proper audio card at least. And this card I keep talking about is just another way of saying: I feel there's room for more innovation that would hopefully become a standard. Standards are good for one thing: they mean the vast majority of people get to use these things. The music industry in general fears this massive expansion of people using more serious tools to give life to their own creations and being able to present those online with the same level of presentation, and to the same standards, that only a few studios could manage just a few decades ago. I think of that as evolution; it's good that more people are able to be creative. I don't even care that some people feel they can't make revenues like the rock stars used to. So less ego, and more community. And we see that where you have a community working you get good results: think of how Max has expanded the usability of Live to the point where they've decided to integrate it fully into their software. They know, just as we do, that some hobbyists can actually bring more value to their product. Same goes for Reaktor.
I remember you saying these are not "native" languages. But implementing new hardware with an open language that could be used by any developer out there - and the hobbyists - for bringing more demanding audio (maybe not just audio...) applications to life would be great. If this were shared across all the digital audio workstations out there, a new standard like MIDI perhaps, they could all throw a big part of their plug-ins' processing onto that board and thus produce a big performance boost for all audio workstations. What's wrong with that?
I read in the latest issue of Music Tech's mag - dedicated to Logic Pro - that they'll update their Audio Unit format in ways VST3 already has. That's a double effort for the same thing, another of the by-products of this ego/brand-centered economy we live in that also doesn't help the user much. Moving on, another point mentioned is that they're aiming for better thread distribution among the cores. That's good, of course!
You developers go ahead and tell me: I think our CPUs aren't being used to their full potential either, right? It's all good and great if you listen to the marketing hype; 4x more performance in everything gets you thrilled quite fast. But how do these new features in CPU technology get translated into actual performance for the user? Many times it takes a while. That's one of the reasons I don't jump on a new OS as soon as it shows up: they have this nasty tendency to dive into new features while leaving others behind, just after they got to the point of working properly for the first time ever. I'm not basing this on any detailed information, just on my own subjective experience with computers.
I personally think CPU technology has hit a ceiling (actually it's been hovering around the ceiling for many years now). They are adding more cores, which is good for certain types of applications, but as has been discussed here before, some computing tasks are serial in nature and do not lend themselves completely to parallelism. Thermal limitations, among other things, prevent CPUs from getting much faster.
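That serial-fraction problem is basically Amdahl's law: if a fraction p of the work parallelizes and the rest is serial, n cores can never give you more than 1 / ((1 - p) + p / n) overall speedup. A quick sketch with an assumed 80% parallel fraction shows how quickly the curve flattens:

// Amdahl's law: diminishing returns from extra cores when part of the job is serial.
#include <cstdio>

double amdahlSpeedup(double parallelFraction, int cores) {
    return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
}

int main() {
    const double p = 0.8;  // assumed parallel fraction, purely illustrative
    int coreCounts[] = {1, 2, 4, 8, 16, 1024};
    for (int cores : coreCounts)
        printf("%4d cores -> %.2fx speedup\n", cores, amdahlSpeedup(p, cores));
    return 0;  // with p = 0.8 the speedup never exceeds 5x, no matter the core count
}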
GPUs on the other hand still seem to be enjoying big performance gains, generation over generation, while finding ways to do so with lower temperatures and less power consumption. Exciting times in GPU-land, not so much on the CPU side of things.
All of that aside for a moment, and taking into consideration what you've brought up about Apple's diminished focus on desktops: lots of the buyers of Apple computers for music-making purposes go with laptops or iMacs, partially for value but more likely for mobility. DJs and musicians are on the go more than ever. That presents a problem for makers of, for example, a DSP card like the TC PowerCore, since there's nowhere to put the card. What's more, high-end GPUs need a desktop as well for thermal reasons; better airflow is required. You mentioned the possibility of an external box, but then you lose the benefit of a bus directly on the motherboard, and it's back to streaming over USB/FireWire etc. Of course, this can be done, but we've seen it can be flaky and come with drawbacks of its own, like latency.
Did you ever look at the Openlabs stuff? They still sell the Meko, although I can't say I know of anyone that uses one. It's kind of like what you've described, a separate box (just happens to have a keyboard) with its own DSP... well actually better than a DSP -- rather a full blown PC running Windows and lots of soft synths http://openlabs.com/LxdPage
TweakHead
18.06.2013, 01:58 PM
Yep, I've seen them listed in music stores. These are dedicated audio workstations, right? I think even Roland/Cakewalk sells some complete solutions as well. But I think it would be a lot better if high-end computers - the Mac Pro is just an example of this, of course - incorporated some of these high-end features in their main configurations. A better, more reliable sound card is just the obvious improvement I'd like to see implemented. I mean, there are great internal sound cards that can be expanded with external interfaces for extra connectivity, from vendors like RME or Apogee for example. These two offer really low latency, very impressive word clocks for syncing hardware devices, and top-notch audio quality.
To some extent, what I've been ranting about here is that the standards should be set higher. It's easy to convince people to buy a new computer based on looks alone (like the majority of Apple's computers), but if we're serious about making computers better suited for demanding applications, they could just as well come packed with higher-quality components overall. My idea of introducing a new component at this "basic configuration level" is another way of saying that we should be able to take more for granted given the price tag of such computers, and even that the industry should focus on setting new standards - because ultimately it would help the programmers (not having to translate the same plug-in to different formats) and the users, because we'd be getting better software with access to more resources, and ultimately a performance and quality boost.
It's easy enough to say this, but if we take into account how the market works, it seems a daunting task. I'm going to stress again that it isn't positive when the interests of particular brands surpass the interests of the user or even go against them. Making a profit is not only an objective here but also a survival necessity - that's granted. But to what extent are we willing to go before we start making the compromises that would allow better solutions to be achieved? It's like politics; diplomacy is much needed here! No one can win alone, and if no one steps back just a little bit, we'll keep seeing this plethora of standards, and this trade-secret-protecting versus open attitude - even towards users and hobbyists - stretch beyond reason.
I agree with you about CPUs, even though performance has been getting better nonetheless.
Cheers
I just saw this article and thought about this discussion:
http://arstechnica.com/business/2013/06/nvidia-throws-open-the-licensing-doors-on-its-keper-gpu-technology/
"In turn, licensees will receive designs, collateral, and support from the company."
Holy shit, how much more incentive does one need? Obviously, this is not something NVidia is specifically targeting audio-related uses for, but to my knowledge there is no other platform (combination of hardware+software dev kit) out there this capable of fulfilling the kind of need we're talking about here.
TweakHead
20.06.2013, 08:04 PM
Nice! Thanks! Yep, I guess they're promoting this seeing that it would be an advantage for them as well - making it easier to reach more devices.
We forgot to talk about tablets and their potential - which will certainly rise - for acting as synthesizers and whatnot. Maybe because we feel the quality doesn't cut it yet when compared to other solutions. But if this evolves, as it presumably will, it could be the answer to our needs. Once again, however, we're talking about super-proprietary items here, unless Android or Ubuntu conquers a bright future somehow. There's something to it, though, and most music magazines are paying a lot of attention to it. Some of it is just plain marketing hype of course, but even serious players like Moog have made special products for this new market - and they do sound good and take advantage of the technology. Still a long way to go, of course, but it's kind of cool (perhaps nothing more than that XD)...
On another note: do you guys feel that the synthesizers we're currently using will be rendered completely obsolete by more and more improvements in features and especially quality, as the processing capacity of newer computers allows for the implementation of more demanding features? And, holding that thought, that some of the software instruments we use today will be regarded as classics the same way a Minimoog is today, as the vintage synth website seems to believe by listing some of them among their hardware cousins?
And how will our Virus (if that's the case eheh) hold up against more capable software synthesizers? I don't mean future incarnations of the Virus (whenever they decide to show up XD) but the current ones we have?
There is something about most tangible, manufactured items that makes them magical and of higher value once they are no longer being produced. When it comes to certain types of items, like musical instruments, that "magical value" goes through the roof - vintage keyboards, for example.
It won't really matter how well they hold up (musically I mean, not physically); they will still be desirable for nostalgic reasons. I've seen people selling their vintage gear like the Jupiter-8 or Prophet-5, for example, because they A/B'ed the Arturia plug-in side by side and found them indistinguishable (the plugin being of course easier to work with). Vince Clarke, founding member of Depeche Mode, Yaz and Erasure, put most of his hardware gear up on eBay the moment soft-synths crossed the equality threshold, and he was known for having one of the best synth studios (partially an underground bunker) in the industry.
I think software is already there; it's just that there is still something to be said for a hardware synth that is a self-contained instrument with no dependency on a DAW, PC or additional controller, and hardware synths that are no longer in production have a particular mystique. I'd love to have a Kawai K5 again - not because it was really a great synth (it's considered by many to be the most difficult synth to program ever created, and I never even thought the sound was all that great), but I have a lot of fond memories of music creation with it, so there's a nostalgia value there.
So, I'm not sure synths ever become obsolete until they physically stop working, or in the case of software just fall into the unsupported graveyard (Albino is an example of that which has come up before).
About tablets and their potential... to me the value there is strictly in the mobility of it (very important for some, less important for me). I see a tablet as having all the limitations of a laptop, except much worse: airflow, thermal limitations and overall computing power do not put them in the league of the hardware I'm currently interested in as a primary means of music creation. But when you're sitting in a hotel room bored or whatever, pulling out the tablet and tapping out a riff can be satisfying.
TweakHead
21.06.2013, 12:36 AM
Yep, I also feel that software quality has gone through the roof lately. One of the things we don't talk about much here: compressors and equalizers, especially those you use to color the sound or shape its tone rather than surgical stuff; reverb is really much better than when I first started using music software, and there are some lush-sounding ones out there. Things like Melodyne... I mean, it's just black magic. Now there's also a great audio-to-MIDI thing in Ableton that's amazing. Being able to use such a thing as Monark or Diva - I think they sound even better than Arturia's Mini Moog V, even though the movement thing on Arturia's can lead to places you can't go with the others... - and hearing those filters gives me the chills sometimes. I mean, we're able to do stuff today that is simply amazing, like having access to tons of high-quality samples for drums or whatever we choose on the fly - I'm thinking Kontakt here - and replacing drums as we wish to test things out. That alone puts almost any hardware sampler - even the really expensive ones - to shame, and that's indisputable.
I have to agree with you when it comes to physical objects and instruments holding their value more. The Virus is great to tweak and get inspired by; it's not all just about the sound. And there's a tendency for me to explore it more because it's just right there. But it's not just synthesizers. I don't really feel the need to invest in a Neve equalizer or even an Eventide FX unit or something. For such big bucks, I'd rather take my chances with software, even though I know these things are still killer... I feel software compressors used to be lame and now they behave like the real stuff; same goes for equalizers. Which ones do you guys like best here?
For me in modern times, compressors and equalizers all come in the form of software, slapped onto a mixer channel of a DAW -- but no doubt hardware versions have allure of their own. On that topic I'd rather sit back and hear from those in the know than pretend I do...