All interactions with the peripheral are handled through the use of interrupts.

The next step is to verify with a multimeter that 5 V is present on all 5 V traces and leads. The LEDs on the FT232RL should also light briefly as the IC negotiates a connection to the computer. The next test is to unplug the USB power and provide a voltage between 7 and 36 V to the battery connector. If the LED on the 3.3 V LDO lights up, all the regulators on the PCB are functional; again, confirm the voltage at the output of each regulator with a multimeter. With the power sources and the FT232RL validated, the next test is to load the Serial.X project using the MPLAB X IDE and the serial programmer. The IDE will indicate whether the microcontroller has been successfully identified and the code loaded. If so, verify the 3.3 V sine wave at both outputs of the oscillator. Finally, the Serial.X project test harness is a simple loopback to an external computer: communicating through a terminal program over the correct COM port should echo the input back to the user. From this point on, each peripheral in use is validated by loading the appropriate driver project with the associated test harness specified in the preprocessor macro configuration of the IDE. Evaluating every sensor and peripheral is a time-consuming process, so a good area for future development is the design of an automated board-testing station. Given the future deployment possibilities, such a station would be useful for diagnosing and troubleshooting errors or for post-crash analysis.

Writing applications for microcontrollers is known as embedded programming, as opposed to general programming for larger-scale systems. The main constraint is the limited memory available for program and data, typically on the order of hundreds of kilobytes for most microcontrollers. A further challenge is the hardware-specific nomenclature and configuration of the peripherals. For example, to enable a UART on the OSAVC microcontroller, one first has to configure the peripheral by changing the register bits of the memory-mapped I/O that controls its function. Fig. 4.1 shows a snippet of the initialization of UART1. Although the logic of the routine is clear, the register names and values are cryptic, and determining the correct settings and hardware registers requires careful study of the microcontroller datasheet. Perhaps the main challenge of writing embedded firmware, however, is ensuring that code is fast and repeatable. The need for speed comes from the vehicle control requirement: most vehicles require control algorithms operating at tens to hundreds of Hz. Furthermore, the control period itself needs to be repeatable; that is, the latency and jitter of the control loop must be predictable and small. This requirement arises because, while controller design on a microcontroller is necessarily digital, the vehicle itself operates in the continuous domain. Variations in the period of the control loop introduce extra noise in the form of timing jitter, which can make the control less precise in the best case or unstable in the worst case. These constraints and challenges are a significant barrier to adoption of the OSAVC.

In particular, to ensure efficiency and low latency, all the peripheral drivers have to be non-blocking; that is, they cannot allow the processor to stall while waiting for data. This requires the drivers to use interrupts to communicate with the peripheral. To reduce the difficulty for an adopter of the OSAVC, we have written drivers for many common sensors and devices, developed control application code that can be adapted to a specific vehicle implementation, and incorporated several useful utilities common to most autonomous vehicles. This work is detailed in the next sections. The peripheral drivers are located in libraries in the OSAVC repository. Each driver consists of two files: a header file that specifies the module's public methods and a source file. Inside the source file, the private variables and functions are declared static to limit their scope to the module. In this manner the code base is modular and similar to object-oriented programming. An interrupt is a process whereby the microcontroller receives a signal from a hardware peripheral informing it that the peripheral needs attention, for example, when a new piece of data has arrived. When the microcontroller receives the interrupt request (IRQ), it pauses execution of the main program and jumps to an address in memory pointed to by the interrupt. At this memory location is a pointer to the appropriate interrupt service routine (ISR). The ISR is a small piece of code which handles the particular hardware need and clears a flag in the peripheral register to indicate that it has been handled. Each interrupt has a priority to deal with the case when multiple interrupts occur simultaneously. To make this more concrete, the code in Fig. 4.1 is configured to interrupt the processor when a character is received or transmitted. When a new byte is received from the UART it is stored in a register, and the UART sets a flag in a control register indicating what caused the interrupt.

The microcontroller then pauses code execution and jumps to the ISR. Inside the ISR, the character is read from the register and stored in a buffer. The flag is cleared and the microcontroller resumes the main code. A snippet from the UART1 ISR is shown in Fig. 4.2. Many vehicle developers will never need to write a peripheral driver if they use the existing sensors and devices listed in Table 3.1 of the previous chapter; however, if they need a specific device, the existing code base can be used as a template. Finally, we have written sample control applications using a state machine approach to reduce the algorithms to the minimum complexity required. A typical architecture for an application is shown in the state machine diagram in Fig. 4.3. In this example, all the initializations are performed first, then a simple while loop is entered. At the start of the loop the current time is queried, after which the transition to the next state occurs. The Check Timers state checks whether a timer has expired. These are the synchronous events we define; for example, we could configure a timer for 10 msec to compute a new control command. If the control period has elapsed, the main loop calls the controller update in the Service Timer state. Once the control update completes, the state machine checks the next timer. The Check Timers state is exited once all the timers have been evaluated. Using this structure we can define as many timers as needed, all based on one hardware timer. After the synchronous events have been evaluated, the next state is Check Peripheral Events. These events are represented by boolean values indicating, for example, that a sensor has new data to be processed. They are asynchronous, meaning that they may occur at any time. If an event has occurred, the state transitions to the Service Peripheral state, a function which services the event. For example, the radio control module receives data over UART5.
If a new command string from the radio controller is available, the application calls a function which decodes and stores the radio controller data. These data consist of the various switch and gimbal settings of the controller. After decoding and storing the data, the state transitions back to Check Peripheral Events, which continues until all events have been checked. At any point in the main loop the microcontroller may be interrupted by the hardware peripherals. This is shown as the arrow in the lower left of the figure labeled ‘IRQ’, representing an interrupt request. When one is detected, the main loop execution is paused and the ISR is performed; once it completes, the main loop continues. The architecture presented in the previous section allows for modular controller update functions. A general controller update flowchart is shown in Fig. 4.4. Once the control timer has expired, the update sequence is called from the Check Timers state. The first action is to reset the timer. Next, a function is called to estimate the current state of the vehicle; a state vector consists of all the parameters that define the vehicle's dynamics. The state vector is sent to the controller, which uses it and the desired state to calculate the outputs. These outputs are sent to the actuators and the execution flow returns to the calling state. This architecture demonstrates three great advantages of using the OSAVC. The first is that the state estimation function and the controller each consist of two files: a header file and a source file. Thus testing a new algorithm is as simple as replacing two files and recompiling the source code.

The second is that it is easy to evaluate the performance of new algorithms with respect to latency by making use of the system hardware timer. We use this technique to evaluate benchmark results in the next chapter. Finally, this architecture allows algorithms to be developed and tested in Matlab before deploying to the vehicle using the software's automatic code generation capability. We used this capability to develop two of the benchmark algorithms in the next chapter as a demonstration.

In this chapter we discuss a method for characterizing the performance of the OSAVC. The inspiration comes from the computer industry, which often employs benchmark tests designed to compare the performance of various systems using a common algorithm. Ideally, the benchmark study would compare the performance of the OSAVC against a commercially available autopilot, for example. However, the difficulty of implementing a novel algorithm on such an autopilot is one of the main motivations for developing the OSAVC in the first place, namely that it is challenging to modify the source code of these existing devices. Instead, we designed a study that compares four different hardware configurations: the OSAVC, the Digilent UC32 development board, the Raspberry Pi Pico RP2040 microcontroller, and the Raspberry Pi 4B computer hosting a standard Linux OS. More detail regarding these processors is found in Section 6.1, where we present the results. The benchmark uses an attitude estimation algorithm developed by Mahony to evaluate the hardware systems. Attitude estimation is a method to determine the orientation of a vehicle in three-dimensional space and is used in some form by most autonomous systems. Attitude estimation algorithms are known as attitude heading reference systems, or AHRS.
This particular algorithm uses two inertial sensors, a triaxial magnetometer and a triaxial accelerometer, as well as a triaxial gyroscope to provide the input to a complementary filter. The filter is discussed in detail in a following section; fundamentally, it measures the orientation of two inertial vectors along with the angular velocity in the vehicle frame to determine the attitude of the vehicle in the inertial frame. This approach is similar to the Kalman filter, which is optimal when the dynamics are linear and the noise sources are normally distributed. The complementary filter has been shown to match the performance of the Kalman filter when tuned properly [6]; moreover, it is much less computationally expensive, making it suitable for embedded systems. The parameters of interest in the benchmark are the mean and distribution of the latency of the filter. Latency is an important parameter because it dictates the speed of the update rate. For challenging applications, e.g., the stabilization of a quadcopter or similar UAV, the update rate must be as fast as possible to provide the greatest margin of stability. The distribution of the latency is also important because variation in the latency can affect the stability of the craft. We implemented four variations of this filter to provide a deeper understanding of the performance of these hardware systems. Two implementations use quaternions to represent the attitude, one with single-precision floating-point numbers and one with double precision. The other two use direction cosine matrices to represent the attitude, again one using single-precision floats and one using double precision. These are discussed in greater detail in Section 5.6. Attitude estimation fundamentally measures a dynamic state of a sensor, that is, the orientation and rotation of the sensor in the inertial frame.

