Did you make any New Year resolutions? I just had the one: not to make any resolutions, so I was on a hiding to nothing before I even got started. I did start thinking about what the message around abstract modelling should be in 2013. Was it more of the same, or were there new angles to explore?
At DAC last year there was a panel called “System Models - Does One Size Fit All?” Luminaries representing IP, EDA, silicon vendors and OEMs set out their wish lists and debated the topic. It was about as heated as a technical conference gets, with some very polarised stances. By the end of the debate it was clear that the viable solution today entails providing and using models that are optimal for one part of the design spectrum, useful in others and irrelevant elsewhere. It was also clear that hardware and models have different characteristics; the more abstract the model, the greater these differences tend to be as we strive for higher performance. It is these abstract models, and the Virtual Platforms built from them, that I want to focus on.
In some scenarios those differences can be detrimental, but in other circumstances they can be very useful. So when the cry goes up “the hardware and the model are different because (insert difference here)”, the answer is not always “change the model”. There should be an understanding that you can often achieve the same goals with the model without doing things the same way as the hardware.
When we talk about the benefits of models, and virtual platforms, we largely focus on availability, incremental delivery and flexibility. Flexibility is the benefit I want to highlight, but before that it is probably worth reiterating the others. It is somewhat of a “given” that the virtual platform will be ready before the hardware is. Having an environment for early software development offers the potential to shorten development cycles: partners that have adopted this approach report saving several months, with very fast bring-up of software on the hardware once it is delivered.
Likewise, it seems obvious that a virtual platform can be delivered incrementally. Starting with the core subsystem and building outwards, useful work can be done with each delivery, each one offering a more complete and fully featured platform. I am not claiming this can’t be done with hardware, but it is a lot less practical.
Incremental delivery could be considered one type of flexibility in a virtual platform. However, the one that interests me most - and has the greatest scope for development this year - is the extraction of data from the platform for analysis and debug. Hardware solutions often support trace and debug access, but there are drawbacks. The capabilities are designed into the hardware with specific ports, registers and trace hardware within the core, which means the end user has little or no scope to adapt or extend them. If you are assembling an SoC with IP from different sources (and even, regrettably, from the same source) there can be incompatibilities between the implementations. On top of that you need specialised hardware to attach to the design under test to capture the information and then analyse it. If the analysis throws up a requirement to trace something the hardware doesn’t support, what then? Ultimately there is a cost associated with debug and trace in hardware: a trade-off between how much needs to be designed in to provide useful and usable information, and keeping the overhead acceptable.
In the virtual world it is relatively simple to redefine what gets traced and when. The model is developed with a range of trace sources which are, in general use, dormant. In the case of Fast Models there is a flexible plug-in interface that attaches to the models and activates trace sources as and when needed. The plug-in can then either process the data it extracts to provide useful information, or pass it to a downstream tool for visualisation.
Changing the plug-in, or changing the trace sources, is a straightforward process - usually no more than a recompile. Looking at complex dependencies is also simplified: the trace plug-in can be set up to track events that combine several otherwise unrelated trace sources and present them in intuitive ways.
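To make the idea concrete, here is a minimal C++ sketch of a dormant trace source and a plug-in that activates only the sources it needs. The names (`TraceSource`, `CountingPlugin`) are hypothetical illustrations of the pattern, not the actual Fast Models plug-in (MTI) API:

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical trace source: dormant (near-zero cost) until a plug-in attaches.
class TraceSource {
public:
    explicit TraceSource(std::string name) : name_(std::move(name)) {}

    // The model calls emit() at interesting points; with no subscriber
    // attached this is just a null check, so dormant sources are cheap.
    void emit(uint64_t value) {
        if (callback_) callback_(name_, value);
    }

    void attach(std::function<void(const std::string&, uint64_t)> cb) {
        callback_ = std::move(cb);
    }
    void detach() { callback_ = nullptr; }

private:
    std::string name_;
    std::function<void(const std::string&, uint64_t)> callback_;
};

// Hypothetical plug-in: activates chosen sources and processes their data.
// Attaching it to several sources lets one plug-in combine otherwise
// unrelated trace streams into a single view.
class CountingPlugin {
public:
    void activate(TraceSource& src) {
        src.attach([this](const std::string& name, uint64_t) {
            counts_[name]++;  // here: just count events per source
        });
    }

    uint64_t count(const std::string& name) const {
        auto it = counts_.find(name);
        return it == counts_.end() ? 0 : it->second;
    }

private:
    std::map<std::string, uint64_t> counts_;
};
```

Swapping in a different plug-in, or attaching it to a different set of sources, is exactly the recompile-only change described above; the model itself is untouched.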
This works at two levels: tracing the performance of the model itself and tracing the performance of the software application running on the model. There are also interdependencies between the two: for example, if the application is making heavy use of a peripheral such as an MMC, we can see both that activity and how efficiently our Virtual Platform handles it. The screenshot below is at too small a scale to show the detail, but it does show how this looks in the ARM® Streamline™ Performance Analyzer, part of the ARM Development Studio 5 (DS-5™) toolchain.
The top half of this picture shows various events (exceptions, CP15 accesses, MMC accesses) and virtual CPU loadings, making it easy to identify dependencies and bottlenecks. Similar kinds of analysis are also supported by integrations with tools from our EDA partners. If I’m not getting what I want from this view, it is easy to change the platform, and the data extracted, to get what I do need.
Before I slip into making this a product pitch (or technical training) I should come back to the message. Opening up this interface is giving me lots of new insights into how these complex, interrelated properties of Virtual Platforms interact. But underlying them all is the understanding that models offer methods of control and analysis that are not open to hardware solutions. It is often necessary to think laterally about how the model can be leveraged: modelling or mimicking exactly what the hardware does is not always the best way to achieve the results you are looking for.
Vive la difference!
Rob Kaye, Technical Specialist for Fast Models, ARM, recently celebrated 30 years in and around the semiconductor industry. Rob joined ARM 6 years ago and for the last two years has focused on modelling solutions. Prior to joining ARM, Rob had lengthy spells at Mentor Graphics and Texas Instruments in a wide variety of roles and locations around the world.