Debugging embedded microprocessor systems
What is new is that their increasing performance requirements, complexity, and capabilities demand a new approach to their design. Fisher, Faraboschi, and Young describe a new age of embedded computing design, in which the processor is central, making the approach radically distinct from contemporary practices of embedded systems design.

They demonstrate why it is essential to take a computing-centric and system-design approach to the traditional elements of nonprogrammable components, peripherals, interconnects and buses.

These elements must be unified in a system design with high-performance processor architectures, microarchitectures and compilers, and with the compilation tools, debuggers and simulators needed for application development. VLIW architectures have long been a popular choice in embedded systems design, and while VLIW is a running theme throughout the book, embedded computing is the core topic.

A guide to using Linux on embedded platforms for interfacing to the real world.

For over 20 years, Software Engineering: A Practitioner's Approach has been the best-selling guide to software engineering for students and industry professionals alike. The sixth edition continues to lead the way in software engineering.

A new Part 4 on Web Engineering presents a complete engineering approach for the analysis, design, and testing of Web applications, increasingly important for today's students. Additionally, the UML coverage has been enhanced and significantly increased in this new edition. The pedagogy has also been improved to include sidebars, which provide information on relevant software tools, specific workflows for specific kinds of projects, and additional information on various topics.

Additionally, Pressman provides a running case study called "Safe Home" throughout the book, which shows the application of software engineering to an industry project. The book has been completely updated and contains hundreds of new references to software tools that address all important topics in the book. The ancillary material includes an expansion of the case study, illustrated with UML diagrams.

The On-Line Learning Center includes resources for both instructors and students, such as checklists, categorized web references, PowerPoint slides, a test bank, and a library of software engineering papers.

The Rabbit is a popular high-performance microprocessor specifically designed for embedded control, communications, and Ethernet connectivity. This new technical reference book will help designers get the most out of the Rabbit's powerful feature set.

The first book on the market to focus exclusively on the Rabbit, it provides detailed coverage of: Rabbit architecture and development environment, interfacing to the external world, networking, Rabbit assembly language, multitasking, debugging, Dynamic C, and much more! Authors Kamal Hyder and Bob Perrin are embedded engineers with years of experience, and they offer a wealth of design details and "insider" tips and techniques.

Extensive embedded design examples are supported by fully tested source code. Whether you're already working with the Rabbit or considering it for a future design, this is one reference you can't be without!

The AVR RISC Microcontroller Handbook is a comprehensive guide to designing with Atmel's new controller family, which is designed to offer high speed and low power consumption at a lower cost.

The main text is divided into three sections: hardware, which covers all internal peripherals; software, which covers programming and the instruction set; and tools, which explains using Atmel's Assembler and Simulator, available on the Web, as well as IAR's C compiler. It is a practical guide for advanced hobbyists and design professionals, with development tools and code available on the Web.

A simple printf statement, or your language's equivalent, is perhaps the most flexible and primitive debugging tool.

Printing out variable values lets you discover how your program is operating. Unfortunately, printf is both clumsy to use, requiring code changes and recompiling, and quite intrusive, because it greatly slows execution. It can also produce reams of data that obscure the real problem.

During the early stages of developing a custom board, an in-circuit emulator (ICE) is indispensable.

There's just no substitute for full control of the processor when you don't trust the hardware. You can start debugging the board as soon as it comes out of reset, allowing you to see everything going on in those first crucial microseconds.

You can debug code in ROM. You can usually ignore the minimal intrusion. However, once the operating system is up, hardware tools sometimes fall short. Hardware emulators aren't the same as the production CPU. They don't do well debugging multiple processes. They aren't nearly as flexible as a full set of software tools.

Data monitors show you what your variables are doing without stopping the system.

Data monitors can collect data from many variables during execution, save the histories, and display them in a live graphical format.

Operating system monitors display events, such as task switches, semaphore activity, and interrupts. These monitors let you visualize the relationships and timing between operating system events. They easily reveal issues like semaphore priority inversion, deadlocks, and interrupt latency.

Profilers measure where the CPU is spending its cycles.

A profiler can tell you where your bottlenecks are, how busy the processor is, and give you hints on where to optimize.

Memory testers search for problems in the use of memory. They can find leaks, fragmentation, and corruption. Memory testers are the first line of attack for unpredictable or intermittent problems.

Execution tracers show you which routines were executed, who called them, what the parameters were, and when they were called. They are indispensable for following a program's logical flow.

They excel at finding rare events in a huge event stream.

Coverage testers show you what code is being executed. They help ensure that your testing routines exercise all the various branches through the code, greatly increasing quality.

They can also aid in eliminating dead code that's no longer used.

Find memory problems early

Memory problems are insidious. They fall into three main types: leaks, fragmentation, and corruption. The best way to combat them is to find them early. A memory leak arises when a program allocates more and more memory over time. Eventually, the system runs out of memory and fails.

The problem is often hidden; when the system runs out of memory, the failing code frequently has nothing to do with the leak. Even the most diligent of programmers sometimes cause leaks. The most obvious cause is programming error; code allocates memory and fails to free it. Anyone who has traced a leak in a complex piece of code knows it can be nearly impossible to find. Much more insidious and common leaks occur when a library or system call allocates memory that doesn't get freed.

This is sometimes a bug in the library, but more often it's a mistake in reading the application programming interface documentation. Programmers at Nortel, for example, found a leak during their final burn-in testing. The leak slowly ate 18 bytes every few seconds, eventually causing a crash.

There was no indication where the leak was coming from or why it was occurring. Poring over the code for weeks didn't provide any clues. Ten minutes with a leak-detection tool solved the mystery. The call was well documented; there was simply a difference between the two implementations. A one-line change fixed the problem. The Nortel programmers were lucky. Unit and system testing rarely reveal leaks.

A slowly leaking system may run for days or months before any problem surfaces. Even extended testing may not find a leak that only occurs during one high-traffic portion of the code during real use.

In fact, most leaks are never detected, even in fielded systems. In the best case, reliability isn't critical and users learn to reboot periodically. In the worst case, leaks can destroy the customer's confidence, make the product worthless or dangerous, and cause the product or project to fail. Since leaks are so damaging, many methods have been developed to combat them. There are tools to search for unreferenced blocks or growing usage, languages that take control away and rely on garbage collection, libraries that track allocations, even specifications that require programmers to forgo runtime-allocated memory altogether.

Each technique has pros and cons, but all are much more effective than ignorance. It's a pity so many systems suffer from leaks when effective countermeasures are available. Responsible programmers test for leaks.

Fragmentation presents an even sneakier memory challenge. As memory is allocated and freed, most allocators carve large blocks of memory into smaller variable-sized blocks.

Allocated blocks tend to be scattered in memory, leaving only smaller free blocks from which to carve new pieces. This process is called fragmentation. A severely fragmented system may fail to find a single free 64KB block, even with megabytes of free memory. Even paged systems, which don't suffer as badly, can become slow or wasteful of memory over time due to inefficient use of blocks.

Some fragmentation is a fact of life in most dynamically allocated memory. It's not a problem if the system settles into a pattern that keeps sufficient memory free; the fragmentation will not threaten long-term program operation. However, if fragmentation increases over time, the system will eventually fail. How often does your system allocate and free memory? Is fragmentation increasing? How can you code to reduce fragmentation?

In most systems, the only way to answer these questions is to get a tool that shows you the memory map of your running system. Understand what is causing the fragmentation, and then redesign (often a simple change) to limit its impact.

Any code written in a language that supports pointers can corrupt memory.

There are many ways corruption can occur: writing off the end of an array, writing to freed memory, bugs with pointer arithmetic, dangling pointers, writing off the end of a task stack, and other mistakes. In practice, we find that most corruption is caused by some saved state that isn't cleaned up when the original memory is freed.

The classic example is a block of memory that's allocated, provided to a library or the operating system as a buffer, and then freed and forgotten. Corrupted systems are completely unpredictable. The errors can be hard to find, to say the least. Memory protection only stops other processes from cross-corrupting your process.

Protected processes are perfectly capable of corrupting their own memory. In fact, most corruption is self-corruption: when a stray pointer is written through, the bad location is most likely still in the writable address space of the process, and corruption will result. The only way to ensure no corruption occurs is through language support or testing.

Optimize through understanding

Real time is more about reliability than speed. That said, efficient code is critical for many embedded systems.

Knowing how to make your code zing is a fundamental skill that every embedded programmer must master. The first rule is to trim the fattest hog first. So, given an application with a huge module and one long function, where do you start optimizing?

That's a trick question. The long function could be called thousands of times, or the program could spin forever on one line. The point: making the code run fast is the easy part. The hard part is knowing which code to make run fast. An example may illustrate this.

Loral was building a robotic controller to investigate possible space operations. It was a complex system combining many devices, complex equations, network interfaces, and operator interfaces. One day, the system stopped working. A seemingly trivial and forgotten change in a controller equation caused the transpose code to run during each loop. The transpose routine allocated and freed temporary memory; since memory allocation is slow, performance suffered.

Optimizing every line of code, replacing all the hardware, switching compilers, and staring at the system forever would never have found this problem.

This is typical; when it comes to performance optimization, the majority of the time a small change in the right place makes all the difference. All the coding tricks on the planet don't matter if you don't know where to look. Performance problems can even masquerade as other problems. If the CPU doesn't respond to external events, or queues overflow, or packets are dropped, or hardware isn't serviced, your application may fail.

And you may never suspect a performance problem. Fortunately, performance profiling is simple and powerful. It will also reveal things you never expected, giving you better overall understanding.

How many times is a data structure copied? How many times are we accessing the disk? Does that call involve a network transaction? Have we correctly assigned task priorities?

Did we remember to turn off the debugging code? Profiling real-time systems presents a unique challenge. You need a profiler when you're running out of CPU. But most profilers use a lot of the CPU themselves. You can't slow down the system and still get an accurate picture of what's going on.

The moral is, be sure you understand the Heisenberg effect of your profiler: every measurement changes the system. We'll close this section with another story. ArrayComm was building a wireless-packet-data base station and user terminal. The product was nearly ready to ship, but it couldn't handle the high-load stress testing. Optimizing didn't help. Timing critical code sections didn't help.

Intense management oversight didn't help, either. In desperation, they profiled the system. In a matter of minutes, they found that more than one-third of the cycles were going to a function buried in a third-party protocol stack. Turning off an optional attribute-checking feature that had been turned on for debugging nearly doubled system throughput. The product shipped two weeks later.

Don't put needles in your haystack

Finding a needle in the haystack is a good metaphor for much of debugging.

So how do you find needles? Start by not dropping them in the haystack.


Strategies for Debugging Embedded Systems
Gregory Eakman

The best time to detect bugs is early in the development process. My goals are to:

- Provide a brief overview of model-based software engineering and implementation of these models
- Outline approaches for integration testing of model-based software
- Identify the interesting run-time data and execution points within modeled systems
- Define alternatives for collecting and manipulating model data at runtime
- Integrate the instrumentation with test automation

Integration testing

According to Roger S.

Modeling embedded systems with UML

The effective application of UML models to software engineering for challenging applications, especially in the embedded context, requires a development process that ensures:

- Models are rigorous and complete
- The resulting system implementation can be optimized without impacting the models
- The overall architecture of the system is maintained by the process through multiple releases and requirement evolution

To achieve these goals, model-based software engineering employs a translational approach, defined below.

These domains are represented as packages, and dependency arrows show bridges, which are the flow of requirements between domains. A domain can be analyzed, or it can be developed via other means, such as hand-written code, legacy code, generated from another source, imported from a library, and so on. Domain services are methods that make up the interface of the domain.

Since the domains define a complete specification of a single problem space, they can be independently tested, then combined with other domains for further testing.

Information model: for each domain that is to be analyzed, a UML class diagram is used to define the classes that form the structure of the domain.

By expressing behavioral detail in action language, considerable freedom is retained until the translation phase for how each analysis primitive is implemented, which is critical for optimization.

Design: design is the creation of a strategy and mechanisms supporting the mapping of analysis constructs to a run-time environment.

Figure 1 (Emulation of domain's target environment) shows a test driver, either the DVUI or another program connected to the instrumentation agent, emulating the domain's target environment.

Single domain to system testing

This test approach is scalable from one domain to the integration of multiple domains and into system test (Figure 2: Multi-domain testing).

However, you may visit "Cookie Settings" to provide a controlled consent. Cookie Settings Accept All. Manage consent. Close Privacy Overview This website uses cookies to improve your experience while you navigate through the website. Out of these, the cookies that are categorized as necessary are stored on your browser as they are essential for the working of basic functionalities of the website.

We also use third-party cookies that help us analyze and understand how you use this website. These cookies will be stored in your browser only with your consent.

You also have the option to opt-out of these cookies. But opting out of some of these cookies may affect your browsing experience. Necessary Necessary. Necessary cookies are absolutely essential for the website to function properly. These cookies ensure basic functionalities and security features of the website, anonymously. The cookie is used to store the user consent for the cookies in the category "Analytics".

The cookies is used to store the user consent for the cookies in the category "Necessary". The cookie is used to store the user consent for the cookies in the category "Other. The cookie is used to store the user consent for the cookies in the category "Performance". It does not store any personal data. Functional Functional. Functional cookies help to perform certain functionalities like sharing the content of the website on social media platforms, collect feedbacks, and other third-party features.

Performance Performance. Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors. Analytics Analytics. Analytical cookies are used to understand how visitors interact with the website.

These cookies help provide information on metrics the number of visitors, bounce rate, traffic source, etc. Advertisement Advertisement. Advertisement cookies are used to provide visitors with relevant ads and marketing campaigns. These cookies track visitors across websites and collect information to provide customized ads. Others Others. Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.

With a traditional account Use another account. Account Deactivated. Account Reactivation Failed Sorry, we could not verify that email address. Account Activated Your account has been reactivated.

Sign in. Email Verification Required. Almost Done. Thank You for Registering. Create New Password. Sign In to Complete Account Merge. Resend Verification Email. Verification Email Sent.

Email Verified.



0コメント

  • 1000 / 1000