ARM announces new DynamIQ technology, will focus on AI, machine learning, and phones

By Joel Hruska
For nearly a decade, starting with the Cortex-A8, ARM has been driving ever-higher performance in mobile just as Intel (and to a lesser extent, AMD) once did in the PC space. In less than 10 years, we’ve seen ARM move from 32-bit to 64-bit, roll out vastly more efficient CPUs at both the high end and the low end, and move from single-core chips to CPU clusters with up to eight cores in a big.LITTLE configuration, with a load balancer sophisticated enough to move workloads onto all eight cores and juggle them according to where they’ll do the most good.
Now, ARM is detailing the next stage in its product evolution. Dubbed DynamIQ, the new design expands the idea of big.LITTLE to a much larger array of products, while broadening the kinds of workloads ARM’s new CPUs can handle.

Here’s how ARM describes its own DynamIQ technology:
The introduction of DynamIQ is an evolutionary step forward for ARM big.LITTLE technology which revolutionized multi-core characteristics for our primary compute devices when launched in 2011. DynamIQ big.LITTLE carries on the ‘right processor for the right task’ approach and enables configurations of big and LITTLE processors on a single compute cluster which were previously not possible. For example, 1+3 or 1+7 DynamIQ big.LITTLE configurations with substantially more granular and optimal control are now possible. This boosts innovation in SoCs designed with right-sized compute with heterogeneous processing that deliver meaningful AI performance at the device itself.
Other upcoming features include dedicated processing instructions for AI and machine learning, with an expected 50x boost in AI performance over the next 3-5 years, relative to the Cortex-A73. While that sounds impressive, a large chunk of the improvement would almost certainly come from ARM giving its CPUs the ability to execute, say, eight 8-bit operations in the slot a single 64-bit operation would occupy. Toss in some new SIMD instructions, faster memory and caches, and the intrinsic advantages of a new CPU core designed five years hence, and it’s no longer crazy to think ARM could deliver a performance advantage that huge in specific workloads in such a short period of time (subject, of course, to power and thermal constraints).
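To make that arithmetic concrete, here is a minimal sketch in plain C of the kind of 8-bit math common in quantized machine-learning inference. The point is simply the packing math: eight 8-bit operands fit in one 64-bit register, so hardware that operates on the whole register at once can retire eight multiply-accumulates where a 64-bit operation handles one. This is an illustration, not ARM's actual instruction set.

/*
 * Illustrative sketch only: the packing arithmetic behind the "8x from
 * 8-bit math" estimate. Not ARM's DynamIQ instructions.
 */
#include <stdint.h>
#include <stdio.h>

/* Dot product on 8-bit operands, accumulated in 32 bits -- typical of
 * quantized neural-network inference. A 64-bit SIMD lane holds
 * 64 / 8 = 8 of these operands at once. */
static int32_t dot_s8(const int8_t *a, const int8_t *b, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}

int main(void)
{
    int8_t a[16] = {1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8};
    int8_t b[16] = {8, 7, 6, 5, 4, 3, 2, 1, 8, 7, 6, 5, 4, 3, 2, 1};

    printf("dot product: %d\n", (int)dot_s8(a, b, 16));
    printf("8-bit operands per 64-bit register: %d\n", 64 / 8);
    return 0;
}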
ARM is also including new multi-core flexibility for spreading workloads to the cores best suited to them. A much wider range of core configurations can theoretically be incorporated, and designers no longer have to pair big and LITTLE cores in separate, matched clusters to use the feature.
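For a sense of what steering work toward the best-suited cores looks like in software today, here is a hedged sketch using Linux's sched_setaffinity() to pin a process to a hypothetical big-core cluster. The CPU numbering (treating CPUs 4-7 as the big cores) is an assumed layout for illustration, not any particular SoC, and DynamIQ's finer-grained hardware control sits below this OS-level mechanism.

/*
 * Illustrative sketch only: OS-level core placement on Linux.
 * Assumes a made-up layout where CPUs 4-7 are the "big" cores.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);

    /* Hypothetical layout: treat CPUs 4-7 as the big cluster. */
    for (int cpu = 4; cpu <= 7; cpu++)
        CPU_SET(cpu, &set);

    /* Pin the calling process to those cores; pid 0 = current process. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to CPUs 4-7 (demanding work now lands on the big cores)\n");
    return 0;
}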

The company also plans faster task migration, increased efficiency from shared memory blocks, and support for big.LITTLE within a single cluster rather than splitting big and LITTLE cores into their own separate clusters.

ARM was light on specifics about how DynamIQ will be implemented or when we can expect to see it in market, but the company clearly isn’t standing still when it comes to innovating on its mobile hardware. Like Intel, Qualcomm, Nvidia, and pretty much everyone else, it hopes to drive its products into self-driving vehicles and other robotics over the next decade.