The hottest PC technology for 2009

Posted by mr bill | Posted on 5:25:00 AM

Lightning-fast components headed for your computer

AMD is moving from four cores to six during the next 12 months
Updated: As one year draws to a close, it's only natural that we all want to know what new technologies are in store for us in 2009.

How many cores will your CPU have? How fast will they be? What will be powering PC graphics, and what type of memory will be the next must-have?

The problem is that top technology firms are already beavering away on products and kit that will supersede next year's star buys. So we set ourselves another task – to discover what's over the horizon.

Is next year's kit part of a bigger trend or a gathering revolution in technology? Read on to explore the PC's near and distant futures.

Intel processors

The most hyped and heralded Intel technology for 2009 is the Core i7 processor, which was formerly codenamed Nehalem. Initial product launches will be aimed at high-performance servers; desktop and laptop versions will appear in the second half of 2009. From information provided at the Intel Developer Forum in August and the results we've seen in our lab tests, we can confidently say that Core i7 is the biggest architectural advance Intel has made since the Pentium 4 launched in 2000.

The architecture is scalable from two to eight cores and offers Hyper-Threading technology – the same system used in some Pentium and Xeon processors – to permit two threads per core. Energy efficiency is high on the list of benefits too. A raft of architectural features provides increased performance per core at the same clockspeed (so more performance per watt), and an innovative 'Turbo' mode works hand-in-hand with an improved sleep facility.

Unused cores can be put to sleep, as could those in Core 2; but whereas Core 2 processor cores still used some power in sleep mode, the unused cores in Core i7 will consume virtually no power. And there's more; to provide additional performance for applications that are not multithreaded and hence not able to take advantage of multiple cores, the Core i7 will boost the clock frequency of the remaining core while still keeping the chip within its design power consumption.

Intel isn't saying exactly how much of a boost the remaining core will get, but there are suggestions that it will be from 3.2GHz (the highest launch speed) to 3.4GHz or even 3.6GHz.

The introduction of Core i7 is an example of – in Intel speak – a 'tock'. Intel's so-called 'tick tock' model involves introducing a new generation of silicon technology and a new processor architecture in alternating years. This means that 2009 will be a 'tick' year and, in particular, it should be the year that Intel introduces its 32nm process. Intel was not willing to comment on whether the new process technology will permit an increase in clockspeed, even if only in Turbo mode.

AMD processors

AMD was not as forthcoming about its plans, initially telling us only that its main focus for 2009 will be the 45nm technology transition. It's pertinent to note that AMD is a whole process generation behind Intel here. Despite this initial reticence to talk about futures, Senior Product Manager Ian McNaughton did respond to a couple of questions. His main revelation was that six-core processors will join triple- and quad-core products in the Phenom line-up during 2009.

Having seen that Intel is looking to boost clockspeeds when some of the i7's cores are idle, we tried to get the lowdown on whether AMD has any plans in this area – or whether the megahertz wars are truly over. Ian's response was not surprising given that AMD hasn't quoted the clockspeed in its part numbers for many years.


"For AMD, it has never been about frequency; it's always been about innovation. Buying a PC based simply on the speed at which the CPU runs is misleading, and users who only focus on that ingredient tend to have a poor PC experience."

Memory

What can we expect from memory in the next 12 months? To find out, we spoke to Mark Tekunoff, Senior Technology Manager at Kingston Technology. He suggested that desktop PC memory modules will probably reach 4GB each in 2009.

This is as large as server modules are today, but short of the 8GB modules that we can expect in the server market next year. If Gigabyte's new GA-X58DS4 motherboard for the Core i7 is typical of the next generation, we will see six memory slots and hence a massive system capacity. However, 32-bit versions of Windows can address at most 4GB (and typically make somewhat less than that available), and on the application side there's a limit to how much memory can be used effectively.

For example, it's been suggested that while the science fiction first-person shooter game Crysis is one of the most memory-hungry applications around, it probably can't use a lot more than 4GB of memory.

In reality, we'll start to see mainstream users moving to 4GB of system memory in 2009, a level that formerly tended to be the domain of high-end gamers and power users with a requirement for content creation.

Regarding memory architectures, the CPU vendors' roadmaps show a migration from DDR2 to DDR3, with the latter becoming the dominant architecture in 2009. (Note that DDR speed grades quote the effective data rate, which is twice the actual bus clock.) Whereas DDR2 topped out at 1,250MHz (1,250 million 64-bit transfers per second, with double the bandwidth in dual-channel mode), the DDR3 standard specifies 1,600MHz as the highest data rate – chips rated up to 2,000MHz exist, but they're not widely supported.

Having touched on dual-channel memory (where two memory modules are accessed in parallel, thereby achieving 128-bit transfers between memory and processor), it's relevant to point out that Intel's Core i7 supports triple-channel DDR3 memory.

In the fullness of time, therefore, we can expect to see 1,600 million 192-bit transfers per second – a staggering 38.4GB/s. However, it's important to stress that the initial Core i7 releases support only up to 1,066MHz DDR3.
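That peak figure is simple arithmetic: the effective transfer rate multiplied by the bus width in bytes. A quick sketch in Python (assuming the standard 64-bit, i.e. 8-byte, module width per channel):

```python
# Peak memory bandwidth = effective transfer rate x bus width in bytes.
# DDR3 modules are 64 bits (8 bytes) wide per channel.

def peak_bandwidth_gbps(transfer_rate_mhz, channels, bytes_per_channel=8):
    """Theoretical peak bandwidth in GB/s."""
    return transfer_rate_mhz * 1e6 * channels * bytes_per_channel / 1e9

# Triple-channel DDR3-1600: 1,600 million 192-bit transfers per second.
print(peak_bandwidth_gbps(1600, 3))   # -> 38.4
# Initial Core i7 parts: triple-channel DDR3-1066.
print(peak_bandwidth_gbps(1066, 3))   # -> 25.584
```

So even the 1,066MHz memory supported at launch delivers a theoretical 25.6GB/s, well beyond anything dual-channel DDR2 could manage.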

Graphics

The other main PC component with a major impact on performance is the graphics system. An Nvidia spokesperson told us that one of the main technologies for next year will be Stereo 3D. This was launched at Nvision, but won't reach users until next year because it relies on the availability of new 120Hz screens.

Graphics card suppliers have been claiming that their products produce 3D effects for years, but that meant nothing more than high-quality rendering to give a feel for texture (an important element of depth perception). The 'stereo' tag means that this is true jump-out-of-the-screen 3D. There are lots of display technologies that can achieve this, but the one Nvidia is promoting uses active shutter glasses, synchronised to the display, to achieve the 3D effect.

The other technology that Nvidia was keen to talk about was PhysX, which it anticipates becoming far more widespread throughout 2009. Essentially a set of physics algorithms, the technology is supported by PhysX-ready GeForce processors.

The first games using PhysX as an integral part of game play are expected to launch during the final two quarters of 2009. It's claimed that it will provide a greater level of reality in areas such as explosions, smoke and fog, and will permit characters to have complex, jointed geometries for more lifelike motion.

Intel also discussed its forthcoming GPU (graphics processing unit) – codenamed Larrabee – at the recent Intel Developer Forum. Expected in 2009 or 2010, the first product will target the personal computer graphics market and support both DirectX and OpenGL. Intel describes Larrabee as 'many-cored', but is unwilling to elaborate on exactly what it means by this phrase at the moment.


Disk drives

Most people don't get too excited about disk drives because constant increases in capacity have meant that disks rarely fill up. But there's more to a disk than its size, and the recent launch of Intel's X25-M solid state drive (SSD) suggests that disks for 2009 and beyond could be very different to those in common use today.

Although the X25-M is not the world's first SSD, the new technology needs the intervention of a giant like Intel to jump-start interest. SSDs have undeniable benefits compared to magnetic disks. Firstly, they're fast. The X25-M offers a sustained data transfer rate of 250MB/s for reads and about a third of this for writes.

Secondly, as SSDs have no moving parts, we can reasonably expect the term 'disk crash' to be consigned to the history books. On the downside, the X25-M isn't cheap and its capacity isn't huge (80GB or 160GB), but – as with all new technologies – it's likely that improvements will come thick and fast. We hope 2009 is the year of the SSD.

Moore's Law continues

Few manufacturers were prepared to reveal their plans for the more distant future, but this didn't prevent us from engaging in a bit of long-range crystal ball gazing – with a little help from the history books. For example, the history of semiconductor development has been characterised by various trends. Surely the most famous is Moore's Law, which states that the number of transistors that can be crammed onto a chip doubles every two years.

This law has held true since Gordon Moore formulated it in 1965, and today's largest chip – the quad-core Itanium processor codenamed Tukwila – has in excess of two billion transistors. Trends do run out of steam, though; perhaps the most famous one to do so was the race for ever-higher clockspeeds.

This will surely be the fate of Moore's Law one day, but the good news is that most experts expect that we will see Moore's Law continue until at least 2022. This means that it's possible that we will see chips with four billion transistors in 2010, eight billion in 2012 and so on up to a staggering 256 billion in 2022.
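The projection is just repeated doubling from today's two-billion-transistor baseline, which a few lines of Python confirm:

```python
# Project Moore's Law forward: the transistor count doubles every two
# years, starting from roughly two billion transistors in 2008 (Tukwila).
counts = {}
count = 2_000_000_000
for year in range(2008, 2024, 2):
    counts[year] = count
    count *= 2

print(counts[2010])  # -> 4000000000 (four billion)
print(counts[2022])  # -> 256000000000 (256 billion)
```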

If the predictions prove true and we do see this number of transistors, how will they be employed? Will it just be more of the same?

The next big challenge

For an independent view of the answer to this question, we spoke to Rudy Lauwereins, Vice President of Embedded Systems Designs at IMEC, Europe's largest independent research centre in nanotechnology and nanoelectronics. "It would be the same," said Lauwereins, "if you assume that the architecture isn't going to change. But a few things give me the feeling that we'll see a drastic change in architecture."

Lauwereins put the current trends into context by taking us through the evolution of processors from the 4004 to the present day. Initially, the increase in transistor count was used to boost the core computing infrastructure.

This meant that the register width increased, as did the number and complexity of the instructions. Because this change occurred in parallel with increasing clockspeeds, problems occurred because the speed of memory struggled to keep up. The industry then entered a phase where extra transistors were used to get data to the processing engine.

The next step concerned the problems caused by increasing power consumption. This caused a halt to increases in clockspeed and required transistors to be spent on power management. The most significant change at this point was the migration to multiple cores. Much of this will be familiar to anyone who has followed the development of processors, and each of these trends is evident in the new Core i7. But Lauwereins' suggestion of how transistors will be used in the future was vastly different to what we've seen historically.

The next phase, he suggested, concerns the variability of the production process at very small feature sizes: if a gate oxide is meant to be three atoms thick, one that comes out at two atoms behaves very differently. Transistors could therefore be spent on keeping these physical variations under control.

As an example, Rudy spoke about how to deal with a chip that was designed for a power consumption of 100W but was experiencing power levels varying from 85W to 150W. The solution is for the component to measure its own power consumption and then adjust voltage and frequency itself to keep its power usage within the limits.
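The self-regulating chip Lauwereins describes amounts to a feedback loop: measure power, then trim voltage and frequency until the part is back inside its budget. The sketch below is purely illustrative; the power model (dynamic power scaling roughly with V^2 * f, as in CMOS switching) and all the constants are our own assumptions, not real silicon behaviour.

```python
# Toy feedback loop in the spirit of Lauwereins' example: the part
# measures its own power draw and steps voltage/frequency down to stay
# within a 100W design budget. All constants here are illustrative.

BUDGET_W = 100.0

def power(voltage, freq_ghz, activity=1.0):
    # Dynamic power scales roughly with V^2 * f (CMOS switching power);
    # the 30.0 is an arbitrary scaling constant for this toy model.
    return 30.0 * activity * voltage**2 * freq_ghz

def regulate(voltage, freq_ghz, activity):
    measured = power(voltage, freq_ghz, activity)
    while measured > BUDGET_W:
        freq_ghz -= 0.1                      # step the clock down...
        voltage = max(0.9, voltage - 0.02)   # ...and the voltage with it
        measured = power(voltage, freq_ghz, activity)
    return voltage, freq_ghz, measured

# A hot part: 1.2V at 3.2GHz on a heavy workload drifts to roughly 150W,
# so the loop throttles it back under the 100W budget.
v, f, p = regulate(1.2, 3.2, activity=1.1)
```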

Lauwereins went on to give us a glimpse of an exciting new development – 3D chips. Here, chips are manufactured with copper needles sticking out through the silicon, making contact with the top metallic layer of a chip sitting below. This would provide lots of interconnects between chips.

Since the very first IBM PC, processors have become at least 20,000 times faster, while memory has only become 10 times faster for random access. The reduction in the cell size of memory chips hasn't helped because memory is still connected to a bus that is limited in its width; even with the new triple-channel DDR3, this is only 192 bits.

However, 3D stacking provides thousands of connections between the processor and the memory, with the roadmap showing a doubling every two years. This technology has the potential to remove the memory bottleneck.

Memory and core futures

The static RAM that's used for cache is also due for an overhaul before too long. Lauwereins suggested that it will be harder to continue putting more cache on processors. The Core i7 is evidence of this. It has 8MB of L3 cache, which is a pretty modest increase over the Core 2's maximum of 6MB of L2 cache. But in going to 3D, other types of memory can be used on a different layer.

DRAM, for example, is very fast but only when lots of consecutive memory locations are accessed – so it could be used in combination with other types of memory to provide fast access for every situation. Today, many different memory types are being evaluated and new technologies are starting to appear. They all have different characteristics from what we are used to, and this could affect the way we build processors.

Lauwereins went on to suggest that the increase in the number of cores is a trend that won't continue much further. He thinks that AMD's planned six-core processors and Intel's eight-core Core i7 might be about the limit because of the problem with cache coherency. Solving this problem means making sure that the data is valid in the L1 cache for each core, a task that rises quadratically with the number of cores.
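To see why that bookkeeping balloons, count the pairs of caches that must be kept consistent: n cores give n(n-1)/2 pairs, so the work grows quadratically. A toy illustration:

```python
# Cache coherency bookkeeping scales with the number of core pairs that
# must be kept consistent: n * (n - 1) / 2 pairs, i.e. quadratic growth.
def coherency_pairs(cores):
    return cores * (cores - 1) // 2

for n in (2, 4, 6, 8, 80):
    print(n, coherency_pairs(n))
# Doubling the cores from 4 to 8 almost quintuples the pairs (6 -> 28),
# and 80 cores would mean 3,160 pairs - hence the 80-core chip's
# avoidance of shared memory.
```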

"To solve this problem," Lauwereins told us, "they'll have to write programs in parallel languages, a skill that can't be mastered overnight." So what about Intel's 80-core processor, which was demonstrated some time ago? "It doesn't have shared memory," he explained, "and it had software specifically written in a parallel language."

Finally, we asked what trends we are going to see in terms of decreasing feature size over the next few years and why there's still an interest in decreasing feature size given that the increases in clockspeed that demanded ever-smaller feature sizes have now plateaued. "Today we are at 45nm, we will see 32nm next year, then 22nm and 16nm and they will just continue unless economy puts an end to it," said Lauwereins.

"Cost is the driving force. Historically, the move to the next process technology halved the cost because the area of silicon halved, but today the cost per square millimetre increases from one process to the next so we no longer get a factor of two saving. This means that economics might eventually bring decreasing feature sizes to an end."

Source: http://www.techradar.com
