Access Virus & Virus TI community since 2002 Virus TI Infekted

Go Back   The Unofficial Access Virus & Virus TI Forum - since 2002 > General discussion > Studio equipment

Studio equipment An area for general discussion about studio equipment, excluding Access products which have a dedicated area.

Reply
 
Thread Tools Search this Thread Display Modes
#1
28.04.2005, 03:23 PM
DIGITAL SCREAMS
This forum member lives here

Join Date: 09.11.2002
Location: United Kingdom
Posts: 2,049
Why are computers taking so long to break the 4GHz barrier?

I can remember back in summer 2000...the big breakthrough: '1GHz processors!' From 2000 through to 2003 we went from 1GHz to 3GHz....pretty damn quickly....but now progress seems to have really slowed down. What gives? Is it the heat issues?

I'm thinking that computers will need to get to 5GHz 64-bit before softsynths and plugins become really viable in the studio. What do you guys think?

DS
__________________
http://www.youtube.com/user/DIGITALSCREAMS

The SynthWizard has some advice - Back in the 1980's music was better, TV was better, films were better. Not to mention fashion.... Let me help you relive the past with some classic 80's sounds from my vintage synth collection....
#2
28.04.2005, 04:04 PM
ten
Veteran

Join Date: 08.04.2004
Location: Reading, England, UK
Posts: 528

I doubt they'll get to 5GHz. Intel were going to go for 4GHz this year but have given up and are now pursuing dual-core (and quad-core) like AMD.

The reason is that they're hitting the technical limit for raw CPU speed with their current production methods and simply cannot push it any higher.

Multi-core is the way forward now, so over the next few years expect to see dual-, quad- and even 8- and 12-core CPUs, but you won't be seeing a single-core 5GHz chip this side of 2010, if ever.

ten
#3
28.04.2005, 04:12 PM
Timo
Administrator

Join Date: 13.07.2003
Location: Kaoss Central, England
Posts: 2,561

Heat.

P4 Prescotts are infamous for their heat problems.

And if you can't go up, you have to go sideways. Dual-core, etc.

The Pentium M is the up-and-coming desktop processor. It works a treat in laptops, as it's considerably faster than a desktop Pentium at the same clock speed; it's been engineered to be extremely efficient (i.e. to give better battery life), and dissipating less heat is another large positive side effect.
__________________
PS > And another thing! Will the Ti|3 have user customisable/importable wavetables? A ribbon-controller or XY-Pad might be nice, too, please! Thanks!
#4
28.04.2005, 04:12 PM
DIGITAL SCREAMS
This forum member lives here

Join Date: 09.11.2002
Location: United Kingdom
Posts: 2,049

Thx Ten....

So effectively what speed would a quad core be running at?

DS
#5
28.04.2005, 04:22 PM
ten
Veteran

Join Date: 08.04.2004
Location: Reading, England, UK
Posts: 528

It's hard to say. Dual-core, quad-core and other multi-core chips are very similar to the dual-CPU and quad-CPU setups of today. Basically, the software running on them has to be multithreaded or multi-CPU aware to take advantage of the additional cores/CPUs. 95% of the software currently available is not, so the extra cores won't make any difference at all, but as Intel and AMD are pushing dual/multi-core so hard, this will change drastically over the next 12-24 months.

The only bummer is that, as with dual-CPU setups, dual-core doesn't offer double the performance for double the CPU. So a dual-core 2GHz chip (2x 2GHz cores) won't actually behave like a single 4GHz chip; more like 3GHz in fact, so you're getting roughly a 50% increase. The gain per extra core shrinks as you add more of them, but of course you're still getting a boost, and the threads are split among the CPUs, which is less taxing overall.
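To put some rough numbers on that, here's a minimal sketch using Amdahl's law (the two-thirds parallel fraction is just an illustrative guess, not a measured figure from Intel or AMD):

```cpp
// amdahl.cpp -- rough estimate of multi-core speedup.
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
// of the workload that can run in parallel and n is the number of cores.
#include <cstdio>

double speedup(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    const double p = 0.67;  // guess: two-thirds of a DAW workload parallelises
    for (int cores : {1, 2, 4, 8}) {
        std::printf("%d core(s): %.2fx a single core at the same clock\n",
                    cores, speedup(p, cores));
    }
    return 0;
}
```

With that guess, 2 cores land at roughly 1.5x a single core (the "2GHz dual-core feels like 3GHz" figure above), 4 cores at about 2x, and 8 cores at only around 2.4x, which is the diminishing return described here.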

The new DAW system I'm putting the finishing touches to today (it will be up tomorrow) is dual-CPU but dual-core ready. So when I finally do upgrade to dual-core CPUs it will be like having a quad-CPU system, which apparently SX3 and Nuendo 3 (both of which I'll be running, hopefully in 64-bit mode by then) can take advantage of, and life will be oh so rosy

ten

#6
28.04.2005, 05:52 PM
AlexHall74
Administrator

Join Date: 18.10.2004
Location: Florida
Posts: 1,485

OK folks, here is a HUGE thread from the StudioCentral forum.

http://studio-central.com/phpbb/view...872&highlight=

I posted this in its entirety as that forum requires a user ID and PW and I know some people will not sign up.

In short, there is some really deep information about microprocessor design and chip limitations, so if you have a few minutes check it out; it is well worth it.

The authors are: kernmount, mikemchargue, jeffers_mz, earthloop, km2783, and nmodi.

Regards,

-Alex
--------------------------------------------------------------------------------------
Interesting results from a recent beta test of the new technology.

http://www.extremetech.com/article2/...1781863,00.asp

I was surprised to see the dual-core systems performing poorly. There are a lot of possible reasons for the performance deficit, namely a beta BIOS, single-threaded (non-multi-processor-aware) apps, and perhaps a buggy pre-release CPU.

I guess I was most surprised to see the darn thing behaving like the old-fashioned dual CPU (dual die) configurations. I was expecting better performance on properly multi-threaded apps.

If Intel, and AMD for that matter, want this technology to succeed, they are going to have to deliver results. The P4 Extreme Edition, Intel's latest CPU, really smokes; it's a real champ right now in many performance areas and is particularly good for us DAW users.

So at this point, it looks like the dual-core Intel P4s and AMD Athlon 64s are going to have the same market as their older brothers: software developers and media-creation folk. Whether they are of value to DAW users is debatable.

I must say I am a tad disappointed, but this is probably quite premature.



_________________
Ryan Kern-Mount
www.kernmount.com

* My Gear
* My Songs
* Random Link from my Site


nmodi
Top Contributor



Joined: 22 Nov 2002
Posts: 1859
Location: Toronto, Canada
Posted: Mon Apr 04, 2005 11:16 am Post subject:

--------------------------------------------------------------------------------

I like my Athlon 64 system. Just because it's a 64-bit chip doesn't mean it can't perform well on a 32-bit platform - which it does. And when the software catches up, I won't have to upgrade my hardware right away. Plus it wasn't too much more $$ than the Athlon XPs or Bartons at the time.





jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Mon Apr 04, 2005 7:41 pm Post subject:

--------------------------------------------------------------------------------

I'd like to see the CPU management processing take place on or very near the CPU, rather than in software.

Developers are not likely to port their software to the relatively small market segment that requires the power of multiple processors, until and unless the majority of the mass market shows the desire.

With distributed processing management taking place on the CPU itself, any software could take advantage of the extra processing power.

Since we're just now seeing two cores on one die, it is likely to be a while longer before the CPU manufacturers can place what are essentially three processors on the same chip: one for processor management, and two for processing.

Will it happen?

Maybe.

A big limitation on faster single-CPU chips is heat management, which constrains multi-core chips just the same.

If single-core CPU technology can maintain steady increases in throughput, then the added complexity of managing multi-core chips doesn't seem to offer enough return on the research investment.

If single-core technology runs into a brick wall, and/or if multi-core technology begins offering significantly higher processing rates, then multi-core has a chance.

Either way, it looks to me like this technology is in an interim stage, which makes it hard to assess its future.





kernmount
Top Contributor



Joined: 17 Nov 2002
Posts: 2749
Location: Grass Valley, CA USA
Posted: Tue Apr 05, 2005 12:39 am Post subject:

--------------------------------------------------------------------------------

All good stuff, jeffers. Imagine the heatsink on these babies!

There isn't a real use for dual-processor rigs outside of the heavy development/server niche. It just doesn't make sense for most professionals and consumers.

That's why I was hoping that these single-die dual-core CPUs would amount to something. You know, lower manufacturing costs, a quicker bus between the cores and all of the other benefits of having 2 CPUs tied closely together, etc.

We'll see. If I had to make a prediction right now, I bet we're still a few years off from a practical dual- or quad-core CPU for home/professional use.

I was hoping for a real innovation.




mikemchargue
Top Contributor



Joined: 02 Jun 2003
Posts: 576
Location: Tallahassee, FL
Posted: Tue Apr 05, 2005 7:49 am Post subject:

--------------------------------------------------------------------------------

I dabble in microprocessor theory and design. I have friends working at IBM and Intel who work in CPU design, so I feel I am qualified to add to this thread. I'll make this as short and easy as possible, but this will still be long and hard to read for a lot of us.

First, with current fabrication processes we seem to be approaching a point where higher clock frequencies in production microprocessors are very difficult to attain. All the major CPU vendors have seen the rate at which their processors' clock speeds rise slow so dramatically that inside the industry they call it a stall. This happened about the same time we moved to a 90-nanometer fab process. At 90 nanometers, the effects of quantum mechanics begin to play a serious role in how these chips behave. There is so little room between traces on the substrate that electrons can actually jump across traces and create a short.

So we are getting lower per-wafer yields, with slower processors, at 90 nanometers than we had at 130. This is unprecedented in the history of the industry. We can't go back to 130, because the industry has always relied on greater circuit density to achieve greater speeds.

So, there are 3 main tracks that can be taken to greater performance.

1. We can simplify the design of microprocessors. Doing so allows them to run at a greater clock rate. Tests by Intel and IBM have shown that greatly simplified cores running at a much higher speed give LOWER performance than current cores in Windows and Linux. That's bad.
2. We can increase the number of cores on a single die and lower the clock speed to compensate for the increased circuit complexity. As I will explain below, this is the primary focus for the industry now, for good reason.
3. Do both. IBM and Sony have taken both routes with the Cell microprocessor. They use MANY simplified cores on a single die running at high clock rates (over 4 GHz so far). This is what you'll see in the PlayStation 3. This works fine for a brand-new platform, but the Cell would be terrible for Mac OS and Windows. It will also be a bear to develop for.

The only apparent way, barring some engineering breakthrough on the fab side, to increase the MIPS per die in a significant and cost-effective manner is to move towards multi-core designs. We're making life easier for the hardware guys and putting the burden on the software developers.

The issue is ILP vs TLP. ILP is Instruction-Level Parallelism. This is what we focus on today. Developers write applications as a single thread and the compiler breaks the code into more manageable chunks. They are issued to the processor as a single app, but using techniques like out-of-order execution and superscalar design, plus using the integer, floating-point and SIMD execution units at once when the data allows, processors divide the code and do some level of parallel processing.

Thread-Level Parallelism is where apps are coded as independent threads that communicate with each other as needed. This entails 2-3 times the amount of work of writing a "regular" single-threaded app. The payoff is that this code easily scales across multiple, discrete processor cores and gains a performance boost as a result.
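As a toy illustration of what coding for TLP looks like (a minimal sketch only: the buffer size and the gain step are made up, and real DAW engines are far more involved), here's a workload split across cores with std::thread:

```cpp
// tlp_sketch.cpp -- one audio buffer split across worker threads.
// Illustrative only; the block size and the "processing" (a simple gain)
// are invented for the example.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Apply a gain to one slice of the buffer. Each thread owns its slice,
// so no locking is needed.
static void apply_gain(float* samples, std::size_t count, float gain) {
    for (std::size_t i = 0; i < count; ++i)
        samples[i] *= gain;
}

int main() {
    std::vector<float> buffer(48000, 0.5f);   // one second of audio at 48 kHz
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> workers;
    const std::size_t slice = buffer.size() / cores;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * slice;
        std::size_t count = (c == cores - 1) ? buffer.size() - begin : slice;
        workers.emplace_back(apply_gain, buffer.data() + begin, count, 0.8f);
    }
    for (auto& t : workers) t.join();          // wait for every slice
    return 0;
}
```

The point is just that each thread gets its own independent chunk; it's that independence, not the hardware, that the developer has to design in.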

The thing is, very little software can take good advantage of an MP system in the Windows world. Of course, Macs have had multiple CPUs for some time and a lot of professional applications put them to good use (like Photoshop and Logic). Multi-core systems will not yield the performance they are capable of until Microsoft revamps Windows with better MP support, Intel/AMD release compilers more suited to TLP development, and application developers rewrite their applications in a multi-threaded manner. That's a lot of work.

The PlayStation 3 is a TLP design. The Xbox 2 is as well, although it just uses multiple CPUs, not a multi-core CPU. The PC is headed toward TLP and the Mac has been for years.

There are always growing pains, but multi-core CPUs are the future of the industry.



_________________


Mike McHargue
mike "at" pixelrecords "dot" com
http://pixelrecords.com
http://mikemchargue.com


My Gear:
http://studio-central.com/phpbb/view...?p=77183#77183
Pictures of my studio:
http://studio-central.com/phpbb/viewtopic.php?t=18191


mikemchargue
Top Contributor



Joined: 02 Jun 2003
Posts: 576
Location: Tallahassee, FL
Posted: Tue Apr 05, 2005 8:06 am Post subject:

--------------------------------------------------------------------------------

Quote:
I'd like to see the CPU management processing take place on or very near the CPU, rather than in software.



That's what we do now. It's called ILP, and it does not scale across multiple CPU cores due to the result-dependent nature of a single-threaded app.
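To make "result-dependent" concrete, here's a tiny sketch (illustrative only, nothing from the thread): the first loop is a running sum where each step needs the previous result, so neither the CPU's ILP tricks nor extra cores can run its iterations side by side; the second loop's iterations are independent and could be split across threads.

```cpp
// dependence.cpp -- loop-carried dependency vs independent iterations.
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> x(8, 1.0f);

    // Result-dependent: step i needs the value produced by step i-1,
    // so the iterations must run one after another.
    float running = 0.0f;
    for (float v : x) running += v;

    // Independent: each element is scaled on its own, so different
    // slices of the vector could go to different cores.
    for (float& v : x) v *= 0.5f;

    std::printf("sum = %.1f, first scaled sample = %.1f\n", running, x[0]);
    return 0;
}
```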

Quote:
With distributed processing management taking place on the CPU itself, any software could take advantage of the extra processing power.


That's the approach with Cell. A single core, mostly traditional in design, hands the instructions off to the SPEs. It still requires work from the developer, and the approach would require a full rewrite of Windows and x86 apps.



Quote:
If single core CPU technology can maintain steady increases in throughput, then the added complexity of managing multi core chips doesn't seem to offer enough return on invested research costs.



They can't. How long ago was it that Intel said they'd have a 4 GHz P4 on the market? What about IBM and 3 GHz G5s?

Quote:
There isn't a real utility for dual processor rigs outside of the heavy development/server niche. It just doesn't make sense for most professionals and consumers.


There will be once Intel gets the chips in enough machines that developers write better code!




km2783
Gold Member



Joined: 17 Mar 2005
Posts: 142

Posted: Tue Apr 05, 2005 9:13 am Post subject:

--------------------------------------------------------------------------------

Keep in mind that when the first P4s came out, the high-end P3s and AMDs could still take them on pretty easily. But eventually things get sorted out, software becomes better suited to the new architecture, and the circle completes itself: the new processor comes into its own.





jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Tue Apr 05, 2005 6:36 pm Post subject:

--------------------------------------------------------------------------------

Nice to have a real-world, hands-on voice of experience to temper my theoretical musings.

Didn't know the technology had plateaued at 90nm. I see what you mean about code support for multicore processing though.

I think you're right about the ball falling to Intel and AMD.

We didn't get 32 bit support from MS until the chips were already in place, and even then, issues of backwards compatibility made the transition soft and slow, rather than abrupt.

Ever heard of a Sun project called Genesis? I picked up a whiff of it back around 2000, and haven't heard a peep since.

The objective was treating a datacenter as a pool of aggregate resources, managed from a central location. Not like SMS, but at the bit and chip level. Instead of this system's RAM or that box's CPUs, all resources were to be globally available for the core programming to allocate as needed. Sort of like what we're talking about here, except instead of utilising multiple cores on a single chip, the plan was to make available all the cores on all the chips in the whole datacenter, plus all the RAM, arrays, etc.

Kewl stuff, but it looks like the development took a wrong turn somewhere....at least, it isn't in production yet.

Edit: Forgot to ask my question...I can see electrons jumping 90 nm or whatever, but where does quantum mechanics enter the situation? Are you saying there's not enough room in the various shells and that random violations of the exclusion principle are responsible for the strays?





mikemchargue
Top Contributor



Joined: 02 Jun 2003
Posts: 576
Location: Tallahassee, FL
Posted: Tue Apr 05, 2005 7:04 pm Post subject:

--------------------------------------------------------------------------------

Although I have a pretty practical understanding of microprocessors, my knowledge of quantum mechanics is much more theoretical and limited. I know that we're getting to the point where conductor traces are only atoms wide.

As for the specific effects of quantum mechanics, you'll have to ask someone who is more into physics than I am. I'm repeating information from Intel's hardware developers' extranet. They state that the counter-intuitive nature of quantum mechanics is playing a role at 90 nanometers and will only get worse as we move to smaller fab processes.




jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Tue Apr 05, 2005 11:43 pm Post subject:

--------------------------------------------------------------------------------

Fair enough. I'll poke around a bit and see what turns up.

"...counter-intuitive nature
of quantum mechanics..."

Indeed.





jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Wed Apr 06, 2005 12:00 am Post subject:

--------------------------------------------------------------------------------

Here we go:

"Most scientists agree we will be able to manufacture 22-nm
chips in about 15 years. Beyond that, all bets are off. That
places the terminus of Moore?s law around 2018?give or take a
few years. The next step would be 16-nm process, which would
result in a gate length of five nanometers. At that point, the
source and drain are so close it will become impossible to
predict electron location (quantum physics is, after all, about
probabilities). Spontaneous transmission or ?tunneling? through
the gate is likely to occur. The Heisenberg uncertainty principle
comes into effect and postulates we will have no way of
knowing whether an individual switch will be on or off. In
computer science, ?not knowing? is unacceptable. Thus, we
appear finally to have defined the limits of Moore?s Law."

http://www.technologydecisions.com/b...4/jan04_11.asp


You can tell by his predictions of sub-90 nm technology that this reference is a bit out of date, but he does outline where quantum mechanics enters the picture.

There's also an interesting piece here:

http://www.harvardmagazine.com/on-line/010544.html

outlining some of the recent directions similar research is taking.





mikemchargue
Top Contributor



Joined: 02 Jun 2003
Posts: 576
Location: Tallahassee, FL
Posted: Wed Apr 06, 2005 5:17 am Post subject:

--------------------------------------------------------------------------------

Wow, great reads.




jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Wed Apr 06, 2005 8:41 am Post subject:

--------------------------------------------------------------------------------

Good deal, glad you liked 'em. It's a fascinating subject, for me anyway, probably because it is so counter-intuitive.

Kernmount, I'm seeing a liquid cooling system integrated into the chip, sort of like in an automobile engine. Something has to give; we're running out of room in the case for bigger heatsinks.





mikemchargue
Top Contributor



Joined: 02 Jun 2003
Posts: 576
Location: Tallahassee, FL
Posted: Wed Apr 06, 2005 8:52 am Post subject:

--------------------------------------------------------------------------------

Intel has actually been experimenting with on-chip cooling channels for liquid cooling.

Apple's 2.5 GHz G5 is liquid-cooled as well...




earthloop
Member



Joined: 19 Jul 2004
Posts: 31
Location: australia
Posted: Wed Apr 06, 2005 2:15 pm Post subject:

--------------------------------------------------------------------------------

Hi all!

Now, I am no expert in this field (in fact, the term 'neophyte' would be an exaggeration )...but maybe this vague bit of info is relevant???

In Perth, Western Australia where I live, there is an Indian (or possibly he is Sri Lankan??) gentleman who has been working, at a local university, on the deployment of light (laser technology?) to replace some of the physical materials which usually act as conductors for electrical signals in computer chips. In fact, I think it had something to do with light (photons) actually being the 'signal' or 'data' rather than electrons?

Apparently (and this is why I thought it was relevant) Intel tried very hard to get him to relocate to America to work for them on this technology, but he liked the lifestyle here and said no. (I love that...picture it...Indian accent, to the Intel exec: "oh no, thank you very much sir, but I am very happy, so I think I will just stay here, thank you.")

Intel, however, were obviously pretty keen on what he is doing so they are now funding his research here.......I think that is interesting in the context of this thread.

I will try to find out more...he is at Edith Cowan University here. There have been newspaper articles about his work (that's how I heard of it)

Sorry about the vagueness of this, but I see it as a technology which may transcend some of the limitations of silicon and metal technology? Please correct me if I am way off the mark.

Cheers



_________________
find what you love, and let it kill you!


jeffers_mz
Platinum Member



Joined: 13 Mar 2005
Posts: 165

Posted: Thu Apr 07, 2005 5:16 am Post subject:

--------------------------------------------------------------------------------

Up until a year or two ago, those interested were having trouble inventing a switch that could be turned on or off with light. Somewhere in that time frame, I heard that the guys at Bell had resolved the issue.

That means the technology is still either at the early transistor stage, or else back at the early vacuum-tube stage. It could even be back at the early solenoid days; I'm not up on the technology enough to judge.

In any event, there aren't any fully optical chips on the market yet, and it may be a while before we see them.

Still, we have R&D tools now that the olden days never even dreamed of, so maybe it will be online quicker than we expect.
__________________
- .... . | -.-. .... --- ... . -. | --- -. .
_____________________
Music is the answer...
ESTP/7w8/Type-A Hostile