REDUCTION OF LARGE SET DATA TRANSMISSION USING ALGORITHMICALLY CORRECTED MODEL-BASED TECHNIQUES FOR BANDWIDTH EFFICIENCY

by

Joseph Daniel Khair

A Dissertation Presented to the
Faculty of the USC Graduate School
University of Southern California
In Partial Fulfillment of the
Requirements for the Degree
Ph.D. in Astronautical Engineering

December 2013

Copyright 2013 Joseph Daniel Khair

"We know that 'all of us possess knowledge.' This 'knowledge' puffs up, but love builds up."
-Paul the Apostle

~ For Su-Lin ~

Acknowledgements

Achievements of this magnitude are not attempted alone. Rarely do we find the success of one individual standing alone, but rather, we find it surrounded by the successes of many gracious others that have offered their time and support. This research is the embodiment of these statements. The list is long and the faces are countless, but I must name a few that have shaped and guided my steps along the way.

First, I would like to thank Talbot Jaeger and Dr. Lisa Hill. You have both truly inspired me to continue to investigate, try new things, and most importantly, learn. For your time and support I am truly grateful. Talbot – thank you for generously funding the entire test fixture. This research could not have been accomplished without you.

I would also like to thank my dedicated committee members, Dr. Erwin, Dr. Kunc, Dr. Flashner, Dr. Wang and Dr. Muntz. Thank you for your patience, time, thoughtful questions and guidance.

While working through this research, I have been surrounded by an amazing group of work colleagues for whom I am so grateful. David and Phil – thank you for working tirelessly to ensure this effort remained my top priority while on fellowship. I will forever remember your partnership and value the team's sacrifice on my behalf.

To the many friends that have helped me through this process, you have my sincere gratitude. Your friendship has provided laughter and joy and always delivered timely distraction to keep me sane. I'm thankful for each of you. Andrew – though we are not bound by blood you have stood closer than a brother. Thank you.

Finally, and most significantly, I have my family to thank. You are scattered around the globe, and from time to time, would call in encouragement from multiple time zones on the same day! I could not have done this without you. To my mom and dad, words cannot express my feelings of appreciation toward you both. I have, at times, tried to express them, but fear that I have failed. Thank you for everything you have done and undoubtedly will continue to do. Better parents no child could ask for and, I am convinced, no one has. And, finally, thank you to my extraordinarily lovely, patient and kind-hearted wife Su-Lin. You allowed me to focus on this, constantly pushing yourself aside and me forward even when I thought I could go no further. This is as much yours as it is mine. I love you.
Contents

Acknowledgements
List of Tables
List of Figures
List of Equations
Abbreviations & Acronyms
Abstract
Chapter 1: Introduction
  1.1: Concept
  1.2: Application to Space
Chapter 2: Background
  2.1: General Control Theory
  2.2: Spacecraft Command, Control and Telemetry
  2.3: Spacecraft Computing
  2.4: Model-based Control
  2.5: Previous & Concurrent Contributions
Chapter 3: Goals
Chapter 4: Methods
  4.1: Test Fixture Design
  4.2: Part Selection & Procurement
  4.3: Test Fixture Assembly
  4.4: Hardware Description
  4.5: Software Description
  4.6: Test Fixture Characterization
Chapter 5: Data & Findings
  5.1: Analysis Metrics & Definitions
  5.2: The Model-based Technique (MBT)
  5.3: The Algorithmically Corrected Model-based Technique (ACMBT)
  5.4: Findings Summary
Chapter 6: Concluding Remarks
  6.1: Summary
  6.2: Recommended Future Research
References
Appendices
  Appendix A: getEpochTimeFrame()
  Appendix B: exportGPIO()
  Appendix C: unexportGPIO()
  Appendix D: initGPIO()
  Appendix E: setGPIO()
  Appendix F: readTemp()
  Appendix G: BB.pl
  Appendix H: PID.pl
  Appendix I: mbtBB.pl
  Appendix J: acmbt1BB.pl
  Appendix K: acmbt1PID.pl
  Appendix L: acmbt2BB.pl
  Appendix M: heaterControl.pl
  Appendix N: roomTempRecord.pl
  Appendix O: dataAnalysisReport.m
  Appendix P: dataAnalysisReportUpdates.m
  Appendix Q: modelGenerator.m
  Appendix R: Sample DTI Calculation

List of Tables

Table 5.1: Summary of Data (Bang-bang Control Data Transfer)
Table 5.2: Summary of Data (PID Control Data Transfer)

List of Figures

Figure 1.1: A Basic Control System Schematic
Figure 1.2: Local Control (Current Paradigm)
Figure 1.3: Remote Control (Unsuccessful)
Figure 1.4: Remote Control (Successful using ACMBT)
Figure 2.1: The Open-Loop Control System [4]
Figure 2.2: The Closed-Loop Control System [4]
Figure 2.3: The State-based Control System [4]
Figure 2.4: Telemetry Commutation [6]
Figure 2.5: Telemetry Commutation with Frame Sync [6]
Figure 2.6: Supercommutation [6]
Figure 2.7: Spacecraft Computing Architecture Overview [5]
Figure 4.1: Algorithmically Corrected Model-based Technique (ACMBT)
Figure 4.2: Test Fixture Concept Diagram
Figure 4.3: Photograph of Test Fixture Build
Figure 4.4: Peltier Cooler Performance at 27 °C [27]
Figure 4.5: High Voltage Relay Circuit Diagram [28]
Figure 4.6: Resistance vs. Temperature, 10 kΩ NTC Thermistor [30]
Figure 4.7: Temperature Sensor Circuit Diagram [28]
Figure 4.8: BeagleBone Photograph
Figure 4.9: Voltage Divider Circuit
Figure 4.10: Completed BeagleBone Proto Cape Photograph (Top)
Figure 4.11: Completed BeagleBone Proto Cape Photograph (Bottom)
Figure 4.12: Completed Proto Cape mated to BeagleBone
Figure 4.13: BeagleBone Expansion Header Mapping (default) [33]
Figure 4.14: BB.pl Software Block Diagram
Figure 4.15: The PID Controller
Figure 4.16: PID.pl Software Block Diagram
Figure 4.17: Output Data File Example Created by BB.pl & PID.pl
Figure 4.18: mbtBB.pl Software Block Diagram
Figure 4.19: acmbt1BB.pl Software Block Diagram
Figure 4.20: acmbt1PID.pl Software Block Diagram
Figure 4.21: Output File Excerpt (Created by acmbt1BB.pl & acmbt1PID.pl)
Figure 4.22: acmbt2BB.pl Software Block Diagram
Figure 4.23: heaterControl.pl Software Block Diagram
Figure 4.24: Output Data File Excerpt Created by roomTempRecord.pl
Figure 4.25: roomTempRecord.pl Software Block Diagram
Figure 4.26: Temperature Sensor Insulation Housings
Figure 4.27: DC power plug soldered to power cooler fans
Figure 4.28: Individual power to the Peltier and the rotary fan
Figure 4.29: First Diurnal Characterization, Raw data
Figure 4.30: First Diurnal Characterization (Zoomed), Raw data
Figure 4.31: First Diurnal Characterization, Per-cycle metrics
Figure 4.32: Second Diurnal Characterization, Raw data
Figure 4.33: Second Diurnal Characterization (Zoomed), Raw data
Figure 4.34: Second Diurnal Characterization, Per-cycle metrics
Figure 4.35: Third Diurnal Characterization, Raw data
Figure 4.36: Third Diurnal Characterization (Zoomed), Raw data
Figure 4.37: Third Diurnal Characterization, Per-cycle metrics
Figure 4.38: Fourth Diurnal Characterization, Raw data
Figure 4.39: Fourth Diurnal Characterization (Zoomed), Raw data
Figure 4.40: Fourth Diurnal Characterization, Per-cycle metrics
Figure 4.41: Fifth Diurnal Characterization, Raw data
Figure 4.42: Fifth Diurnal Characterization (Zoomed), Raw data
Figure 4.43: Fifth Diurnal Characterization, Per-cycle metrics
Figure 4.44: Sixth Diurnal Characterization, Raw data
Figure 4.45: Sixth Diurnal Characterization (Zoomed), Raw data
Figure 4.46: Sixth Diurnal Characterization (Zoomed), Raw data
Figure 4.47: Sixth Diurnal Characterization, Per-cycle metrics
Figure 5.1: Sample of Analysis Metrics Presentation
Figure 5.2: Model Data Collection (Raw Data)
Figure 5.3: Model Data Collection (Per-cycle metrics)
Figure 5.4: Model Excerpt (Created by modelGenerator.m)
Figure 5.5: MBT, Bang-bang (Raw Data)
Figure 5.6: MBT, Bang-bang (Per-cycle metrics)
Figure 5.7: Cooler Temperature Data Scenarios Pre-Model-Update
Figure 5.8: Model Collection, Case 1a, Bang-bang (Raw Data)
Figure 5.9: Model Collection, Case 1a, Bang-bang (Per-cycle metrics)
Figure 5.10: ACMBT – 1 Algorithm, Case 1a, Bang-bang (Raw Data)
Figure 5.11: ACMBT – 1 Algorithm, Case 1a, Bang-bang (Per-cycle metrics)
Figure 5.12: Model Collection #1, Case 1b, Bang-bang (Raw Data)
Figure 5.13: Model Collection #1, Case 1b, Bang-bang (Per-cycle metrics)
Figure 5.14: ACMBT – 1 Algorithm, Case 1b, Bang-bang (Raw Data)
Figure 5.15: ACMBT – 1 Algorithm, Case 1b, Bang-bang (Per-cycle metrics)
Figure 5.16: Model Collection #2, Case 1b, Bang-bang (Raw Data)
Figure 5.17: Model Collection #2, Case 1b, Bang-bang (Per-cycle metrics)
Figure 5.18: ACMBT – 2 Algorithms, Case 1b, Bang-bang (Raw Data)
Figure 5.19: ACMBT – 2 Algorithms, Case 1b, Bang-bang (Per-cycle metrics)
Figure 5.20: Model Collection, Case 2, Bang-bang (Raw Data)
Figure 5.21: Model Collection, Case 2, Bang-bang (Per-cycle metrics)
Figure 5.22: ACMBT – 2 Algorithms, Case 2, Bang-bang (Raw Data)
Figure 5.23: ACMBT – 2 Algorithms, Case 2, Bang-bang (Per-cycle metrics)
Figure 5.24: Model Collection, Case 1, PID (Raw Data)
Figure 5.25: Model Collection, Case 1, PID (Per-cycle metrics)
Figure 5.26: ACMBT – 1 Algorithm, Case 1, PID (Raw Data)
Figure 5.27: ACMBT – 1 Algorithm, Case 1, PID (Per-cycle metrics)
Figure 5.28: Model Collection, Case 2, PID (Raw Data)
Figure 5.29: Model Collection, Case 2, PID (Per-cycle metrics)
Figure 5.30: ACMBT – 1 Algorithm, Case 2, PID (Raw Data)
Figure 5.31: ACMBT – 1 Algorithm, Case 2, PID (Per-cycle metrics)

List of Equations

Equation 4.1: Dew Point Approximation
Equation 4.2: GPIO Numeric Representation Calculation
Equation 4.3: Steinhart-Hart Relationship
Equation 4.4: Conductive Heat Transfer Equation
Equation 5.1: FOM – Traditional Data Transmission
Equation 5.2: FOM – Model-based Data Transmission
Equation 5.3: FOM – Out-of-Tolerance Term (A Term)
Equation 5.4: FOM – Largest Excursion Term (B Term)
Equation 5.5: FOM – Average Term (C Term)
Equation 5.6: Data Transmission Improvement
Equation 5.7: Total Bytes Transmitted – Traditional (Bang-bang)
Equation 5.8: Total Bytes Transmitted – Traditional (PID)
Equation 5.9: Total Bytes Transmitted – Model-based (Bang-bang)
Equation 5.10: Total Bytes Transmitted – Model-based (PID)
Equation 5.11: OOT average
Equation 5.12: ΔC, Local Cooler Temperature Rate of Change
Equation 5.13: Φ, Phase Shift Algorithm Calculation
Equation 5.14: α, Amplitude Scaling Algorithm Calculation

Abbreviations & Acronyms

A - Amps
ACMBT - Algorithmically Corrected Model-based Technique
ADC - Analog-to-Digital Converter
AIN - Analog Input
AM - Ante Meridian
BB - Bang-bang
CPU - Central Processing Unit
DC - Direct Current
DDR2 - Double Data Rate 2
DHCP - Dynamic Host Configuration Protocol
DTI - Data Transmission Improvement
FOM - Figure of Merit
GB - Gigabyte
GND - Ground (electrical)
GPIO - General Purpose Input/Output
HH:MM - Hours:Minutes
HVAC - Heating, Ventilation and Air Conditioner
IP - Internet Protocol
LAN - Local Area Network
LED - Light-emitting Diode
MAC - Media Access Control
MATLAB - Matrix Laboratory
MB - Megabyte
MBT - Model-based Technique
MHz - Megahertz
MPC - Model Predictive Control
NTC - Negative Temperature Coefficient
NTP - Network Time Protocol
OOT - Out-of-Tolerance
OS - Operating System
PID - Proportional-integral-derivative
PSAS - Pressure Sensitive Adhesive Surface
RAM - Random-access Memory
RF - Radio Frequency
RH - Relative Humidity
SD - Secure Digital
SDRAM - Synchronous Dynamic Random-access Memory
SFTP - Secure File Transfer Protocol
SSH - Secure Shell
V - Volts
Ω - Ohm

Abstract

Communication requirements and demands on deployed systems are increasing daily. This increase is due to the desire for more capability, but also due to the changing landscape of threats to remote vehicles. As such, it is important that we continue to find new and innovative ways to transmit data to and from these remote systems, consistent with this changing landscape.
Specifically, this research shows that data can be transmitted to a remote system effectively and efficiently with a model-based approach using real-time updates, called the Algorithmically Corrected Model-based Technique (ACMBT), resulting in substantial savings in communications overhead. To demonstrate this model-based data transmission technique, a hardware-based test fixture was designed and built. Execution and analysis software was created to perform a series of characterizations demonstrating the effectiveness of the new transmission method. The new approach was compared to a traditional transmission approach in the same environment, and the results were analyzed and presented. A Figure of Merit (FOM) was devised and presented to allow standardized comparison of traditional and proposed data transmission methodologies alongside bandwidth utilization metrics.

The results of this research have successfully shown the model-based technique to be feasible. Additionally, this research has opened the trade space for future discussion and implementation of this technique.

Chapter 1: Introduction

1.1: Concept

As long as man has existed, communication has been necessary. In its purest form, communication is quite simple: the transmission of data from one point to another. The complexity that the human mind has to offer makes the task of sending and receiving large amounts of data seem simple. We routinely sense, process and transmit large amounts of information as a matter of second nature or even habit, sometimes unaware of the task at hand. Sometimes after a long day at work we arrive home and cannot recall a single part of the journey. Sometimes we are reminded of a conversation that just took place in which agreements were made for things so mundane that we were seemingly on autopilot. What is it about communication, data processing and the brain that allows the necessary to be prioritized so effectively and efficiently?

This biological example is the inspiration for a new data transmission architecture: an architecture that, like the brain, assesses and evaluates the surroundings, condenses the sensed information and transmits only the necessary data. The general idea being presented here, the Algorithmically Corrected Model-based Technique (ACMBT), is a model-based data transmission technique that makes use of updates to keep the model current. These model updates are small and can be transmitted over a data link efficiently (low bandwidth requirements). The goal is to achieve the intended data transmission across the data link while gaining bandwidth efficiency (reduction).

Though this general concept has applications to platforms of many disciplines, a specific avenue for demonstration has been selected for this research, and the general concept has accordingly become slightly more specific in its description for purposes of demonstration. The particular area of demonstration selected for this research is the transmission of control data over a data link. So long as the idea of closed-loop local control is viable, all is well. However, if we desire to break the physical co-location of sensor, actuator and processor (controller) for any reason, large amounts of data must be transmitted across a data link quickly and synchronously (in the current paradigm, unless we change the rules of thinking). The idea of model-based data transmission is to replace this data-intensive approach with a far more bandwidth-efficient strategy.
The first step is to create and load a model to the remote system, where the sensor and actuator are located but the real-time processing engine has been removed. This may be a semi-large initial transmission, but with strategic compression and representation methods, system performance data can be remarkably reduced in size. Next, the performance of the system is tracked locally and periodically compared against the model using a comparison technique appropriate for the data at hand (relatively light processing requirements). These comparisons may be achieved using single-point spot checks, sliding window averaging, FFT comparisons, max and min comparisons, etc. Depending on the determination made as a result of these comparisons, processing either continues with the stored model, or a model update is requested across the data link. Similar to the comparison technique used onboard to determine real-time (recent) performance against the model, updates are calculated using cleverly devised, system-specific algorithms. The updates should be effective in their ability to correct the data set and efficient to transmit. Once received, the remote system applies the model update in real time and begins processing the new model.
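To make the comparison step concrete, the short Perl sketch below (added for illustration; it is not one of the scripts listed in the appendices) shows how a remote node might apply one of the techniques named above, sliding window averaging, to decide between continuing on the stored model and requesting an update. The data values, tolerance and message strings are invented for the example.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use List::Util qw(sum);

# Illustrative sketch: sliding-window comparison of sensed data against
# a stored model, as described in Section 1.1. Values and the tolerance
# are assumptions, not parameters from this research.

my @model     = (21.0, 21.4, 21.9, 22.3, 22.6);  # model prediction for this window
my @sensed    = (21.2, 21.9, 22.7, 23.4, 24.0);  # most recent sensor readings
my $tolerance = 0.5;                             # allowed mean deviation

# Average residual between sensed data and the model over the window
my $avg_residual = sum(map { abs($sensed[$_] - $model[$_]) } 0 .. $#model) / @model;

if ($avg_residual > $tolerance) {
    # Model no longer matches reality closely enough: send a small
    # request over the data link instead of streaming raw sensor data.
    print "REQUEST_MODEL_UPDATE residual=$avg_residual\n";
}
else {
    # Continue operating open-loop on the stored model.
    print "MODEL_OK residual=$avg_residual\n";
}
```

The point of the sketch is the asymmetry it exposes: the onboard work is a handful of subtractions and one comparison, while the only traffic generated is a short request message when the model drifts out of tolerance.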
1.2: Application to Space

In a world of systems that are increasing in complexity, there is an ever-growing interest in control laws and control theory. Even the most basic of systems make use of closed-loop control laws. Many applications of these control laws have driven additional requirements on communications links. As is normal with state-of-the-art applications, the pressure is on industry to continually expand requirements (as technology becomes more and more capable) in order to sustain the increasing technological wave [1]. In the area of communications, the desire is for more bandwidth, more robust and efficient modulation schemes, and high-power, low-noise amplification to support these requirements. Given this perpetual requirement creep, some questions require further investigation. What if a new approach to data transmission was developed that could actually reverse this trend? Could a new approach result in the same robust control capabilities while reducing the requirements levied on the accompanying communication systems? Let's delve deeper into this discussion by visiting a few basic examples.

To understand the basic idea of sensors, actuators and control laws, let us begin by considering the common home thermostat. See Figure 1.1 for a schematic representation of this basic concept.

Figure 1.1: A Basic Control System Schematic

There are three basic components to this system: sensor, controller and actuator. These components work together to sense the environment, process the data and adjust the temperature. The rate at which the sensor measures the temperature and sends data is called the sample rate [2].

Let's continue to build on this example and apply the concept to space systems. Every satellite has thermal control systems in place. These systems can be broken down into active and passive systems. Although small satellites may need only passive thermal control systems (blankets, reflective shielding material, etc.), most large and complex satellites make use of active thermal control systems [3]. These active thermal control systems operate much like the thermostat example above. Temperature sensors called thermistors read temperatures and pass this information to control laws kept in an onboard computer. The computer makes decisions based on this information and sends command signals to heaters, as necessary. Typically, these thermal control laws are very simple algorithms that have predefined thresholds used as triggers for either turning a heater on or off, as appropriate. The sensed information (in this case temperature) and the actuator status (in this case the heater on/off status) are transmitted to the ground at some cycle rate (not always the same as the onboard sample rate of the data by the sensor). This data is often reviewed by ground personnel and evaluated in the event that additional action is warranted.
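A thermostat-style control law of the kind just described fits in a few lines. The Perl sketch below is a generic bang-bang controller with invented thresholds and stand-in sensor and actuator routines; the fixture's actual bang-bang implementation is BB.pl, listed in Appendix G.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Generic bang-bang (thermostat-style) control law: predefined thresholds
# trigger the heater on or off. Setpoint, deadband and subs are illustrative.

my $setpoint = 20.0;   # desired temperature, degrees C
my $deadband = 0.5;    # hysteresis to avoid rapid actuator cycling

my $temp = read_temperature();        # sensor
if    ($temp < $setpoint - $deadband) { set_heater(1) }   # too cold: heater on
elsif ($temp > $setpoint + $deadband) { set_heater(0) }   # too warm: heater off
# otherwise leave the heater in its current state

sub read_temperature { return 19.2 }  # stand-in for a real sensor read
sub set_heater { my ($on) = @_; print $on ? "heater ON\n" : "heater OFF\n" }
```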
This example demonstrates the current paradigm of local control. The term local control refers to a control system containing sensors, control laws, actuators and all other necessary parts of the control loop locally. In this approach, a lot of information and status data (raw sensor information, actuator status, control law status, etc.) is typically transmitted over the communications link, although most core control processing is maintained and performed onboard (see Figure 1.2).

Figure 1.2: Local Control (Current Paradigm)

As was mentioned in the previous section, all is well so long as this system of local control and minimal transmission is maintained. However, what if we desire to disrupt this system for some reason? For instance, what if we desire to include large amounts of non-real-time data from another sensor in our satellite's onboard processing? With current space hardware limitations this becomes challenging. A potential approach to consider is the creation of a control system that operates remotely.

One way to accomplish remote control might be to take the onboard computer containing the control laws and move it away from the onboard system. This would leave only the sensors and actuators onboard. This implementation removes the strict limitations of onboard processing, which is definitely an advantage. However, it also adds a stringent requirement on the communications link. This link is now vital in transmitting sensor data as well as all necessary commands to act on that data as determined by the controller (now processing remotely). As such, the communications link must now be even faster and more robust than before, so this approach does not achieve our goal of reducing the communications link requirements. This implementation can be seen in Figure 1.3.

Figure 1.3: Remote Control (Unsuccessful)

Another approach to remote control could be achieved by first making a few assumptions. First, assume that computer processing on the ground is infinitely capable. In other words, assume that computing potential and capabilities at the ground terminal are more than sufficient for the task at hand. Second, assume that with this processing power, we can create a model of any controlled system. It is worth noting that the goal is to reduce the system's reliance on a communications link. With these assumptions, consider the following approach (Figure 1.4).

Figure 1.4: Remote Control (Successful using ACMBT)

In this approach, the model of the control data is transmitted across the data link. Rather than the sensor data being processed by a control law each cycle, the model of the data continues to operate the system as it has been programmed. If and when the incoming sensor data exceeds a threshold of validity for the model, a signal is sent from the onboard model to the ground-based model interpreter. The model interpreter is an identical copy of the model being run onboard, and it is quickly sampled by the infinitely capable ground processing, which uses this signal to reconstruct and update the ground-based model, as appropriate. The new model is then transferred to the onboard model and processing continues as normal.

With this approach, actual sensor data need not be exchanged. Rather, model parameters such as coefficients, error flags, urgency flags and possibly heartbeat signals may suffice. This results in vast savings in communication link bandwidth. Additionally, this approach greatly reduces the necessity for constant communications, replacing that notion with a concept of periodic updates.
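Because only model parameters and flags cross the link in this approach, an update message can be very compact. The sketch below packs a hypothetical update (a few model coefficients plus error and urgency flags) into a binary message; the field layout and values are assumptions made for illustration, not a format defined by this research.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical model-update message: a handful of coefficients and status
# flags rather than raw sensor data. The field layout is illustrative.

my @coefficients = (22.15, 0.0031, -1.2e-6);  # e.g., polynomial model terms
my $error_flag   = 0;                         # model/plant disagreement detected
my $urgency_flag = 1;                         # apply update immediately

# Pack: two one-byte flags followed by the coefficients as 64-bit floats
my $message = pack('C C d*', $error_flag, $urgency_flag, @coefficients);

printf "update message is %d bytes\n", length $message;
```

In this toy layout the entire update is 26 bytes; a message of that size sent occasionally, versus a continuous stream of raw samples, is the flavor of the savings being pursued.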
Chapter 2: Background

The basis for this research is an understanding of general control theory, the current state of spacecraft command, control and telemetry, and spacecraft computing. As such, these topics will be discussed briefly here. The current approach in these areas forms the paradigm of spacecraft architecture as we understand it today, and it is important to review this paradigm before discussing a new approach. As these topics have been thoroughly discussed elsewhere, many references have been utilized and the forthcoming discussions follow them closely.

2.1: General Control Theory

In almost every facet of engineering, science, and even human behavior, we find systems in need of control. In fact, early attempts at control date back to 300 B.C., and were documented in publications such as Pneumatica as early as the first century A.D. [4], including references to complicated machines capable of automated movements. Though simple in their early applications, control systems have continued to develop and have now become an integral part of modern technology. In 1997, sophisticated control systems allowed the first autonomous rover vehicle, Sojourner, to explore Mars [4].

A control system can broadly be described as components working together to provide a desired system response. The most basic of control systems employs only one main component, an actuator. A common, simplified picture of a generic open-loop control system is shown in Figure 2.1.

Figure 2.1: The Open-Loop Control System [4]

In this structure, no attention is given to the error (residual) that may develop due to system variables or control inaccuracies. As one might imagine, the open-loop approach can lead to problems when considering disturbances, stability and sensitivity (to name only a few). As such, the idea of closed-loop (or feedback) control provides a more robust framework from which system control can be established. The closed-loop control system can be described (generally) in three distinct parts: sensor, controller and actuator. Unlike open-loop control, feedback control considers the effectiveness of the control system by measurement, and uses this information to inform and adjust future control decisions. The advantages of the closed-loop feedback system have made it the foundation of modern control efforts. A common, simplified picture of a generic closed-loop feedback control system is shown in Figure 2.2.

Figure 2.2: The Closed-Loop Control System [4]

In order to appropriately design a control system, a series of steps is required. These steps involve defining the system, making assumptions, identifying a mathematical model of the system, and finally, solving the differential equations that define the model. Control systems have traditionally been summarized mathematically using Laplace transforms and transfer functions; this approach is often referred to as Classic Control Theory [4]. More recent treatment of the same theory has been dubbed Modern Control Theory and uses a state space representation in place of transfer functions and transforms. In Modern Control Theory, the main mathematical pieces are state vectors and differential equations represented as matrices. The basic concept behind this approach is depicted in Figure 2.3.

Figure 2.3: The State-based Control System [4]
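For reference, the state-space representation sketched in Figure 2.3 takes the standard linear time-invariant form (written out here as an aid; it is textbook material rather than an equation reproduced from [4]):

$$\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t), \qquad x(0) = x_0$$

Here x(t) is the state vector, u(t) the input, y(t) the output, x(0) the initial conditions, and A, B, C and D are the matrices characterizing the plant dynamics.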
2.2: Spacecraft Command, Control and Telemetry

A viable command and telemetry system is essential to the operation of a satellite. This system interfaces between the ground station and other spacecraft units and subsystems. It is responsible for all incoming and outgoing data transmission, including payload mission data transmission [5]. Generally speaking, the command and telemetry system is responsible for receiving all commands, collecting all telemetry, formatting commands and telemetry appropriately, and finally, transmitting these packets to the ground or to other spacecraft units. Design of this subsystem can be complex and challenging, as it must ensure reliable data transmission while keeping bandwidth requirements in check. There are entire areas of engineering research focused solely on radio frequency (RF) communications, error correction codes and efficient modulation techniques.

The telemetry stream begins with sensors that measure a physical attribute and transform the measurement into an engineering unit value. The particular measurands can range from temperatures to pressures, switch positions, valve firings, and more. The collected data must then go to two locations. First, the data must be sent to the onboard computer for processing and adjudication. Second, the data must be sent to the ground for presentation to the ground support team. Establishing individual communications streams to the ground station for each measurand is not practical. Hence, the parallel data (coming from many different sensors distributed throughout the spacecraft) is synchronously read into a unit that commutates the various packets of telemetry. This commutator is responsible for time-multiplexing (serially) each packet into a single stream of pulses, each with a voltage relative to the respective measured channel [6]. This process is depicted in Figure 2.4 for clarity.

Figure 2.4: Telemetry Commutation [6]

One complete revolution of the commutator equates to a frame of telemetry. Each location in the serial stream contains the data of the desired sensor. It is important to note that only the data from that sensor is being transmitted; the name, or mnemonic, associated with each piece of data is not included in the stream. To help distinguish the origin of each piece of telemetry in the stream, a frame sync pattern is typically added at the end of each frame (see Figure 2.5) to serve as a reference [6].

Figure 2.5: Telemetry Commutation with Frame Sync [6]

Determining the appropriate position in each frame for sensor data to be placed is a challenge. Traditionally, there has been tension in deciding just what information must be transmitted and received in order to maintain viable spacecraft control without imposing undue burden on the communications link requirements. Ideally, each measurand should be sampled at a high enough rate that the behavior of interest can be captured [7]. However, sensor data must not be sampled too quickly, as this will result in less space for other measurands to be sampled and/or a larger transmission bandwidth requirement (undue burden on the communications link). A delicate balance must be achieved in order to meet system performance requirements while being mindful of communications system capabilities. To this end, a concept called supercommutation is often employed. Supercommutation is the practice of placing a particular measurand in the telemetry frame multiple times [6]. This concept accommodates measurands that require higher sample rates while still allowing room, as appropriate, for other measurands. This approach is very common and is depicted pictorially in Figure 2.6.

Figure 2.6: Supercommutation [6]
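The following Perl sketch (illustrative only; the slot assignments, measurand names and sync word are assumptions, not a real spacecraft format) assembles one such frame, with a supercommutated measurand occupying several slots while slower measurands appear once:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Illustrative commutated telemetry frame. A supercommutated measurand
# ('battery_temp') is sampled into several slots per frame; slower
# measurands get one slot each. Layout and sync word are assumptions.

my @frame_layout = qw(battery_temp bus_voltage battery_temp
                      valve_status battery_temp wheel_speed);
my $frame_sync   = 0xEB90;   # fixed pattern marking the frame boundary

my %sample = (
    battery_temp => sub { 21.7 },   # stand-ins for real sensor reads
    bus_voltage  => sub { 28.1 },
    valve_status => sub { 0 },
    wheel_speed  => sub { 2450 },
);

# One revolution of the "commutator": serialize each slot in order,
# then append the frame sync pattern as the reference.
my @slots = map { $sample{$_}->() } @frame_layout;
my $frame = pack('f*', @slots) . pack('n', $frame_sync);

printf "frame: %d slots, %d bytes\n", scalar @slots, length $frame;
```

Note that, exactly as described above, the frame carries only values and a sync pattern; which measurand lives in which slot is agreed upon in advance, not transmitted.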
This general description of spacecraft telemetry processing demonstrates the complexity with which sensor data is collected, read and transmitted in today's paradigm. In a similar fashion, commands are carefully constructed by algorithm in ground software, bit by bit as appropriate, and transmitted to the spacecraft. Of course, addressing schemes are necessary to decode the proper destination for each command. Each spacecraft unit is assigned a routing address, and commands are sent on the spacecraft command bus to the correct units for processing and execution.

2.3: Spacecraft Computing

Collecting data from various spacecraft sensors is only part of the control loop. The collected data must be quickly and accurately processed in order to ensure that actuators are exercised properly. This processing is performed mainly on the spacecraft, and also at the ground station, which processes received telemetry [5]. A good pictorial representation of the overall computing architecture (space and ground) required to successfully operate a spacecraft is shown in Figure 2.7.

Figure 2.7: Spacecraft Computing Architecture Overview [5]

Delving a bit deeper, we now turn our attention to a more detailed understanding of the nature of onboard computer processing. Today, computer processing has grown to touch nearly every spacecraft subsystem one way or another. However, the degree to which computer processing is required varies between subsystems. Subsystems like power and thermal management typically require very little computer processing. In contrast, the attitude control, mission support and navigation control subsystems are examples of those (traditionally) requiring a higher level of onboard processing [5]. These tasks are accomplished using complicated mathematical constructs, and hence require the majority of available processor time.

In general, a spacecraft computer is responsible for receiving sensor information, processing that data through pre-programmed algorithms, and generating the appropriate instructions for spacecraft actuators based on the results. Relying on onboard processing for the majority of these real-time computations comes with challenges. First, due to the strenuous space environment (e.g., radiation hazards) and its effect on electronics, onboard processing is usually restricted in capability [8]. Second, aggressive requirements for onboard computing propagate through the entire spacecraft: they increase requirements on the power subsystem, increase launch weight and therefore booster requirements, and so on. These realities limit the onboard computer and hence push the need for higher command and telemetry data transmission between the ground and the satellite.

2.4: Model-based Control

Critical to this investigation is the idea of model-based data transmission. As discussed previously, it is clear that the growing demands and requirements on spacecraft control (due primarily to increasingly ambitious mission goals) result in a need for more robust communications. As stated in the introduction, a goal of this research is to reduce the burden on communications systems. At first glance, achieving more sophisticated spacecraft systems while also reducing the communications burden seem to be mutually exclusive goals. As such, it is critical that the communication methods and processing constructs (reviewed above) are efficient in the way they meet their objective.

The Algorithmically Corrected Model-based Technique (ACMBT) of data transmission is an idea that challenges the current paradigm of spacecraft communications and control. Currently, sensor data is processed at a relatively high rate by the onboard computer and data handling subsystem in order to determine the appropriate instruction set for control actuators [5]. Typically, this means that a large amount of data is also being transmitted. These transmissions include sensor data, actuator command echoes, actuator status information, and more. ACMBT replaces this approach with the novel idea of defining the system being controlled by a model of the system. Similarly, remote computing systems (ground-based or even other space platforms) can operate off of a model interpreter that works in perfect harmony with the model onboard the spacecraft. The attraction of the model-based approach is that it allows for the possibility of sufficient system control while requiring less data processing and transmission (when ACMBT is applied to the area of control data transmission).

In order to discuss this idea on a simple (and easily understood) level, consider the basic thermal control circuit discussed previously. Assume that the sensor, in this case the thermistor reading onboard temperature data, is represented by a curve. This curve could be based on past data (empirical), theoretical knowledge of the spacecraft and environmental variables, or current data combined with predictive analysis to help determine the future shape of the curve. Allow this curve to serve as the model for the thermal circuit.
Now, this model can be represented mathematically and stored in a variety of ways onboard. Let's consider the usefulness of a polynomial fit in this case [9]. Assuming the curve can be well represented by a polynomial of sufficient order, this model (coefficients only) is easily and efficiently transmitted. In this example, the remote system, with more computing power than the spacecraft being controlled, can iteratively process a new model based on many inputs and occasionally (if necessary) transmit new coefficients to the spacecraft, resulting in significant bandwidth savings. This idea and its particular advantages in application will be discussed in more detail in the coming chapters.
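As a toy version of this polynomial idea, the sketch below stores a temperature-versus-time model as three invented coefficients, evaluates it on demand, and treats a "model update" as nothing more than replacing the coefficient set. It is an illustration only, not the modelGenerator.m routine from Appendix Q.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy polynomial model: temperature as a function of time, stored and
# transmitted as coefficients only. Coefficient values are invented.

# T(t) = c0 + c1*t + c2*t^2, lowest-order coefficient first
my @coeffs = (22.0, 0.004, -3.0e-7);

sub model_temp {
    my ($t) = @_;
    my ($value, $power) = (0, 1);
    for my $c (@coeffs) {        # accumulate c_i * t^i term by term
        $value += $c * $power;
        $power *= $t;
    }
    return $value;
}

# The controller consults the model instead of a transmitted sensor stream
printf "predicted temperature at t=600 s: %.2f C\n", model_temp(600);

# A "model update" is just a new coefficient set: a few bytes, not a stream
@coeffs = (22.3, 0.0035, -2.8e-7);
```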
2.5: Previous & Concurrent Contributions

Models and model-based architectures can be found in a variety of engineering applications today. When reviewing previous work in this area, two topics must be explored. First, the use of models for the sole purpose of data transmission must be investigated. Second, the use of models specifically in the world of control theory must be reviewed.

A literature review in the area of model-based data transmission revealed concurrent research being performed at the University of North Dakota, Department of Computer Science. This work focuses on identifying discrepancies between an imagery model and imagery data, allowing bandwidth savings by transmitting only the discrepant data from orbiting sensors to mission scientists. Additionally, the research performed by Straub states that savings in bandwidth efficiency are exchanged for increased consumption of onboard processing resources [10]. The research presented here in some ways builds on that which has been demonstrated by Straub, but differs in others. Straub applied model update techniques to collection or archival data sets (captured images). The work being presented here (ACMBT) is a broader look at the general concept of model-based data transmission. In this study, model-based data transmission is reviewed in the context of dynamic data sets, which are ultimately used for adaptable system control. Furthermore, the approach described by Straub increases the computational requirements onboard the remote system, whereas ACMBT (by virtue of its implementation) does the opposite and reduces the onboard computational complexity, as will be shown.

A literature review in the arena of control theory shows that model-based architecture has been considered. Specifically, the idea of model-based design has been explored before [11]. One of the significant challenges in designing a control system is in the software and algorithm design. In recent years, complex systems have been modeled in software so that control algorithms could be exercised against these plants efficiently and conveniently. This process helps facilitate the iterative process of control algorithm design [12]. As discussed, model-based design is a topic that has been approached before. However, extending the benefits of this idea or philosophy to actual real-time control is an area that has received less attention, though there has been some related effort on this front. Specifically, model-based fault detection has been proposed and even successfully implemented [13]. In this implementation, models are used in real time to identify faults or deviations in plant behavior due to unanticipated hardware malfunction [13]; however, this information is not used to actively control the plant.

Possibly the closest use of a model to drive a plant is found in Model Predictive Control (MPC). MPC strives to relate independent and dependent variables in and surrounding a plant such that future performance of the plant can be iteratively predicted with residuals of decreasing magnitudes [14]. Similar to a proportional-integral-derivative (PID) controller, an MPC requires historical knowledge of prior actuator outputs as well as the current status of the system.

Chapter 3: Goals

The idea being presented in this experimental research is a unique implementation of model-based data transmission. This particular implementation combines the following features:

1. Model-based architecture for data representation and usage
2. Model compression for efficient transmission
3. Algorithmic correction of the model in real time based on system performance

Though the implementation of such a technique applies to a variety of complex systems, the goal of this research is to explore the usefulness of this principle for space systems. The specific goals are listed below:

1. Identify an area with space application to demonstrate the proposed technique
2. Design a test fixture for adequate demonstration
3. Build the proposed test fixture
4. Develop all the necessary software required to interface with the test fixture (hardware) and demonstrate the proposed technique
5. Run a series of tests that compare the fidelity and transmission requirements of the new approach to a traditional data transmission scheme
6. Present all data and findings

Chapter 4: Methods

Testing a new idea destined for use on a space vehicle is rarely straightforward. Although there are instances in which new technologies have been tested on a space platform, those technologies have typically first been thoroughly characterized in a laboratory setting [15]. As such, it is important to be able to create a situation or environment that allows the verification of concepts destined for space without ever leaving Earth [16]. To validate the idea of an alternate command and control methodology, the first challenge is to identify and then create a plant that can be studied and acted upon. For the purpose of this doctoral work, the area of thermal control was selected to demonstrate the concepts, algorithms and feasibility of the proposed approach. In general, the test fixture described here will be used to demonstrate that a model-based data transmission technique is indeed viable.

The following block diagram (see Figure 4.1) represents the overall architecture being proposed. This approach will be referred to as the Algorithmically Corrected Model-based Technique of data transmission (ACMBT). Control data will be transmitted in model form over a data link, in place of individual sensor and relay data. The mechanism used to maintain control of the plant on the other side of the data link is the received model of the plant. The model works in a normally open-loop fashion to provide system control. Periodic updates to the model from a remote platform (when and if necessary) create a quasi closed-loop scenario, improving model fidelity and hence control performance.

Figure 4.1: Algorithmically Corrected Model-based Technique (ACMBT)

4.1: Test Fixture Design

As with all design projects, this test fixture required much iteration. For traceability of the thought process involved in this design, the discussion will focus on the top candidates and the rationale for the final design selection.
The original test fixture design was a large aluminum plate (30.5 cm x 30.5 cm x 2.5 cm), cooled by a large thermoelectric cooler (Peltier device). The intent behind cooling the plate was to thermally dominate the environment in which the experiments were to be run. Above the cooled large aluminum plate, two identically designed heaters would be placed, with appropriately positioned temperature sensors. As the plate's temperature decreased, the temperature sensor readings would decrease accordingly. The heaters would then be toggled on and off to raise and control the local plate temperature. This fixture design would allow for a side-by-side comparison of various data transmission techniques using identical environments and components.

Unfortunately, this design had one major flaw. It became apparent that anytime the temperature of the plate or the surrounding air dipped below the dew point, condensation would form, introducing undesirable moisture to the testing environment. Condensation can and will form at temperatures much higher than freezing. For relative humidity values above 50%, an approximation of the dew point can be made using only the ambient air temperature and humidity, as shown in Equation 4.1 [17]. This approximation is accurate to ±1 °C.

$$T_{dp} \approx T - \frac{100 - RH}{5} \qquad (4.1)$$

In Equation 4.1, T_dp is the temperature at which dew forms, T is the ambient air temperature, and RH is the relative humidity. Evaluating this simple equation for the typical temperatures experienced in Southern California throughout the year showed that the dew point would likely be reached just by cooling the plate a few degrees Celsius [18].
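As a worked example of Equation 4.1 (values chosen here for illustration): at an indoor ambient of T = 25 °C and RH = 60%,

$$T_{dp} \approx 25 - \frac{100 - 60}{5} = 25 - 8 = 17\ ^{\circ}\mathrm{C}$$

so cooling the plate only about 8 °C below ambient would already reach the dew point, consistent with the concern above.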
Since the test fixture would be maintained indoors and would make use of electronic components that are not compatible with moisture, a solution was required. To address this significant issue, the following solutions were proposed (in no particular order):

1. Perform the experiment in the absence of air
   a. Use a vacuum chamber
   b. Build an enclosure for the fixture that is fairly airtight (not perfect) and pump some other gas into it (e.g., nitrogen) to create positive pressure and vacate the air
2. Insulate the exposed surfaces of the plate to mitigate the dew created
3. Use waterproof electronics
   a. Find waterproof temperature sensors, heaters, and cooler/fan
   b. Find a safe way to collect/drain the moisture (since the experiment is conducted indoors)
4. Flip the experiment
   a. Rather than cool the plate to below ambient and use top-mounted heaters to perform local heating, do the opposite
   b. Heat the plate to temperatures above ambient using a large heater mounted below the plate and use small coolers to perform local cooling

Options 1 and 3 may have worked, but would have required substantial additional effort. Option 2 may have eased the severity of the underlying root cause, but probably would not have dealt with the issue completely. Option 4, however, completely addressed the issue. By flipping the experiment, the base operating temperature of the fixture was raised well above the dew point. Hence, the experiment was designed so that even with the coolers running, all temperatures remained well above the dew point, avoiding the creation of condensation.

Proceeding with Option 4 required a complete redesign of the test fixture. Though many of the components were similar if not identical, the way the components were used needed to change. Once the decision was made to proceed with this design, parts were specified and selected. An overall conceptual diagram of the final design is shown in Figure 4.2.

Figure 4.2: Test Fixture Concept Diagram

A photograph of the test fixture, though not yet wired, is shown in Figure 4.3. In addition to the lack of wiring and interconnectivity, the BeagleBone boards (blue boxes) depicted in the conceptual drawing (Figure 4.2) are not shown in the photograph of the test fixture. Also, note that the photograph is rotated when compared to the conceptual drawing.

Figure 4.3: Photograph of Test Fixture Build

4.2: Part Selection & Procurement

Having the support of a team of professionals from the aerospace industry was critical to the test fixture design process. Drawing on the vast experience gained from long careers in industry, this group of multidisciplinary individuals was extremely helpful through the system engineering challenges that accompanied the design of the test fixture.

Once the basic concept was determined and a drawing was complete, a bill of materials was created listing the raw items, components, parts and the associated quantities of each that were needed to build the test fixture [19]. The cost of each individual item was included on the bill of materials. As a general rule, items less than $50 in individual cost were ordered at double the necessary quantity; for items over the $50 threshold, one additional unit was ordered. The total cost of all the items purchased was approximately $2,000.

Producing the bill of materials was an eye-opening experience. Several times through the design process, difficulties were encountered in sourcing a product that was previously available. Parts presumed available were not, and needed to be replaced by similar (but not identical) components. Occasionally the part needed was quickly identified, but its cost was very high. All of these challenges provided a wonderful opportunity to learn about the system engineering process [20] and the necessity for minor redesign without compromising the overall goal of the test fixture.

After months of planning, coordination with suppliers and distributors, and hands-on visits to workshops, the bill of materials was complete. The fixture drawings and bill of materials were presented to a team of industry professionals (who graciously donated their time). Final concerns were discussed and minor adjustments were implemented to the design and bill of materials. Having final sign-off from this multidisciplinary team on the design, materials and proposed budget, the components were ordered. The parts arrived over a six-week period.

4.3: Test Fixture Assembly

Once the parts arrived, the building and assembly of the test fixture began. This process presented many learning opportunities as the techniques necessary for the build were applied. In general, the assembly process for the test fixture included the following tasks:

1. File edges and smooth faces of the large aluminum base plate
2. Measure and mark the aluminum base plate for cooler mount points and top-mounted heater placement
3. Create a template for cooler mount point placement
4. Drill and tap the aluminum base plate for cooler mount points using the template
5. Drill ceramic tiles using the template
6. Measure and cut small aluminum plates to be seated between ceramic and cooler
7. Mount (adhere) the large under-plate heater to the bottom of the aluminum base plate
8. Mount (adhere) the small top-mounted heater to the center of the top of the aluminum base plate
9. Mount ceramic tiles, small aluminum plates and cooler assemblies to the base plate using thermal joint compound (Type 120 silicone) between each layer
10. Remove the small thermistors from each temperature sensor
11. Solder long-lead thermistors to the temperature sensors
12. Solder voltage divider networks for the temperature sensors to each BeagleBone cape
13. Create long signal and power leads to connect the BeagleBone capes to test fixture components
14. Create a power supply connection cable for each power supply
15. Mount temperature sensors to the base plate and glue (thermal epoxy) each thermistor in place on the appropriate surface for measurement
16. Mount all relays to the base plate
17. Trim, crimp and solder all electrical leads and power supplies
18. Connect all components electrically as necessary

4.4: Hardware Description

The conceptual diagram shown previously (Figure 4.2) is a representation of the test fixture used to demonstrate the newly proposed approach to data transmission. This test fixture can be described as a platform that provided two identical units in need of control. These units resided in an environment that subjected them to the same conditions simultaneously. The base of the test fixture was a plate of aluminum that sat on an edge-to-edge heater. This heater was capable of elevating the plate's temperature well above the ambient temperature of the surrounding air. At each corner of the test fixture resided a ceramic offset, which provided a small amount of thermal isolation for the units above. The units above each consisted of four individual components: a small piece of aluminum, a thermoelectric (Peltier) cooler, a heat sink and a rotary fan. As the temperature of the plate was elevated, the temperature of the small piece of aluminum above the ceramic also increased. A temperature sensor mounted to the small piece of aluminum captured this change. The cooler could then be turned on to reduce the temperature of that small piece of aluminum. Note that the ability of the cooler to remove heat from the small aluminum plate was facilitated by a rotary fan blowing air through the heat sink blades as heat from the cooler was conducted upward to the heat sink fins [21]. Temperature sensors were strategically located in various positions on and off the plate to monitor the environmental conditions.

All of the test fixture components were powered by DC power supplies, carefully chosen based on power requirements. Tests were performed using single board computers that ran Linux and custom-written software. In order to interface with the previously described components, general purpose input/output (GPIO) ports were used on the single board computers. Requirements imposed on these GPIO ports necessitated additional circuitry between the ports and the components themselves. The following sections describe each of these components in more detail.

4.4.1: Plates and Tiles

The base of the test fixture was a solid piece of 6061 aluminum [22]. The plate measured 45.7 cm in length, 45.7 cm in width, and 2.5 cm in thickness. The plate was unpolished, but was free from major scratches and defects. Aluminum was selected as the base material for two major reasons. First, aluminum was selected because of its similarity to materials commonly used in space vehicles. Second, aluminum was selected for its thermal properties.
Specifically, aluminum is known for its good thermal conductivity, making it an excellent choice as the baseplate material for this fixture [23]. The plate dimensions were carefully selected to allow enough physical distance between the top-mounted cooler units. The thickness of the plate (2.5 cm) was selected to ensure that there was enough mass to maintain a stable temperature level, while also considering that the cost of aluminum increases substantially with larger thickness. A thinner plate was considered for cost savings, but a smaller volume of aluminum would likely have introduced more temperature variability to the test fixture, and hence was not pursued.

Small ceramic tiles (10.8 cm x 10.8 cm x 0.5 cm) were used as thermal isolators between the main aluminum plate and the cooler units above. Ceramic was selected due to its insulating properties and the low cost of the tiles. Ceramic has a unique, dual-bond structure that makes the material resistant not only to temperature change but also to electrical conduction [24]. These tiles helped to ensure that the cooling of one unit was not transmitted through the larger plate to the other cooler unit. Above each ceramic tile sat the actual item whose temperature was being controlled, a small aluminum plate. This aluminum plate was also 6061 and measured 9.5 cm long, 5 cm wide and 0.3 cm thick. Although thermal separation was desired, complete isolation was to be avoided. Hence, the ceramic tile was mated to the aluminum base via thermal joint compound (Type 120 silicone). The same thermal compound was used between the ceramic tile and the small aluminum plate mounted above it.

4.4.2: Heaters

To gain even and continuous heating of the aluminum baseplate from edge to edge, a custom heater was required. This heater was made to the exact dimensions of the plate in length and width (45.7 cm x 45.7 cm). The heater was constructed from fiberglass-reinforced silicone rubber, which allowed for a resilient, flexible design. The heater was designed with a Pressure Sensitive Adhesive Surface (PSAS) on one side [25], allowing easy attachment to the underside of the aluminum base of the test fixture. The heater was configured with an exit lead tab from which 91.4 cm Teflon leads were provided. This heater ran at 28 V and 2 A, producing a total of approximately 60 W of evenly distributed heating power to the aluminum plate.

An identically designed heater (albeit of smaller dimensions) was mounted on the top side of the aluminum base in the center of the plate. This heater measured 7.5 cm x 7.5 cm and also came fitted with an exit lead tab, 91.4 cm Teflon leads and a Pressure Sensitive Adhesive Surface (PSAS) on one side [25]. This heater ran on 28 V and 1.1 A, delivering approximately 30 W of heating power. The purpose of this heater was to provide additional testing scenarios for the newly proposed data transmission technique.

Note that these heaters were not certified for use in a vacuum. The heaters represented a large portion of the overall cost of the test fixture, and procuring heaters additionally capable of operation in a vacuum would have raised the cost significantly. Since no plans existed to operate the heaters in a vacuum, the decision was made to reduce cost and continue without this option.

4.4.3: Cooler Assembly Units

The cooler assemblies were made of three major components: a cooler, a heat sink and a fan. The central component responsible for cooling was the thermoelectric Peltier device. This device converted a voltage (applied to the device) into a temperature difference from one side of the unit to the other [26].
This device converted a voltage (applied to the device) into a temperature difference from one side of the unit to the other [26]. The cooler used was a standard 4 cm x 4 cm design. The voltage and current that powered this unit depended on the temperature delta between the hot side and the cold side. This particular unit had a maximum input voltage of 15.4 V, a maximum input current of 8.5 A, and could produce a maximum temperature difference of 68 °C between the hot and cold sides [27]. Figure 4.4 shows the heat pumped versus the delta temperature, as well as the relationship of input voltage to delta temperature for a given operating current.

Figure 4.4: Peltier Cooler Performance at 27 °C [27]

The Peltier cooling element was sold separately and had standard dimensions. The Peltier cooling element utilized here was commonly used to cool a standard-sized central processing unit (CPU) in a desktop computer. As such, the Peltier was easily procured and replaced, as required. The remaining two components were the heat sink and the rotary fan. The total unit's cooling efficiency was heavily dependent on the ability of the heat sink and fan to work together and move the heat being drawn away by the cooler. The heat sink was designed with small heat pipes to distribute the heat to multiple places within the heat sink. The fan was 7.1 cm in diameter and ran on 12 V and 50 mA. Heat transfer from the small aluminum plate to the cooler, and from the cooler to the heat sink, was extremely important. As such, all of these thermal connection points were joined via thermal joint compound (Type 120 silicone).

4.4.4: Relays

The Peltier coolers mentioned above began working immediately when power was applied. As such, a relay was inserted between the cooler and the source of power. The relay used was a digitally controlled, normally open switch that controlled a circuit capable of switching high voltages and high currents. The relay also had an LED that illuminated when the switch was closed. This relay was rated for 30 V DC at 10 A, comfortably above the 12 V and 8.5 A maximum demanded by the cooler [28]. A circuit diagram of the relay is shown in Figure 4.5.

Figure 4.5: High Voltage Relay Circuit Diagram [28]

The single board computer provided the control signal (D1), VCC (+3.3 V) and ground (GND) to the relay. When the relay closed, current was allowed to flow between the ON and COM ports. The relays were rated for hundreds of thousands of switch cycles and continuous use [28]. Though the relay was used at very low frequency (<1 Hz maximum), individual component testing showed that it could be switched at 10 Hz comfortably.

4.4.5: Temperature Sensors

A thermistor is a device that experiences a large change in resistance with small changes in temperature. Specifically, an NTC (Negative Temperature Coefficient) thermistor's resistance decreases with increasing temperature [29]. The temperature sensor utilized in the test fixture used such an NTC thermistor, which returned the sensed temperature as a resistance. This resistance then altered an input voltage (provided by the single board computer) appropriately. The particular thermistor used was a 10 kΩ thermistor, indicating that at ambient temperatures (approximately 25 °C), the thermistor returned a resistance of 10 kΩ [30]. The voltage value was read by the single board computer as an analog input and digitized to reflect the correct sensed temperature. This temperature sensor had an operating range of -40 °C to 125 °C and an accuracy of 1.5 °C [30].
As the temperature increased, the resistance value provided by the sensor decreased. This is reflected in Figure 4.6. Specifically, the blue line corresponding to TTC3A103-34D represents the thermistor used in this test fixture.

Figure 4.6: Resistance vs. Temperature, 10 kΩ NTC Thermistor [30]

A schematic of the temperature sensor is also included in Figure 4.7. Note that all signals to and from the temperature sensor were provided by or received by the single board computer.

Figure 4.7: Temperature Sensor Circuit Diagram [28]

The thermistor portion of the temperature sensor was affixed to the small aluminum plate using a small dab of thermal epoxy. The specific algorithm used to decode the sensor reading and convert it into a voltage will be discussed in more detail in Section 4.5.4.6.

4.4.6: Power Supplies

Three power supplies were required for this test fixture. Of these three power supplies, two were identical, one being required for each cooler. Though each cooler had a maximum operating voltage of 15.4 V and a maximum operating current of 8.5 A [27], a lower operating voltage of 12 V was selected. This decision was made in light of the fact that the power supplies already procured were rated at 12 V and 150 W [31]. These power supplies were capable of providing almost 12.5 A to the cooler, well over the 8.5 A maximum current draw from each cooler. These power supplies were made by Vicor, model number VI-LU1-EV [31].

The third power supply was responsible for powering both heaters used in the test fixture. Both heaters ran on 28 V. The large heater drew 2 A, while the small heater drew 1.1 A. The unit used was capable of supplying 28 V and 5.5 A [32]. Hence, it was capable of providing the required power to both heaters simultaneously, making a single supply sufficient for both heating elements. Both heaters were connected to the same power supply through independent relays. This supply was made by Power Supply 1, model number PS1-150W-28 [32]. Notably absent from this section are the power supplies required to run the single board computers. These will be discussed in Section 4.4.7.

4.4.7: Single Board Computers

In order to facilitate experiments run on the test fixture, single board computers were used. Many single board computers are available in today's market. The BeagleBone (made by BeagleBoard) was selected and is shown in Figure 4.8.

Figure 4.8: BeagleBone Photograph

The BeagleBone was equipped with a 720 MHz super-scalar ARM Cortex-A8 processor and 256 MB RAM (400 MHz DDR2 SDRAM). Additionally, each BeagleBone had an Ethernet port for easy connectivity and communication. The BeagleBone came preloaded with a Linux distribution and all of the amenities therein, e.g., Perl, Python, etc. This Linux distribution was installed on a 4 GB micro-SD (Secure Digital) card, also resident on the BeagleBone [33].

Of particular note, the BeagleBone carried two 46-pin headers. These headers dramatically expanded the board's capabilities and usefulness. The two main expansion header features that were used in this test fixture were the General Purpose Input/Output (GPIO) and the Analog INput (AIN) ports. The GPIO ports can be used to either input or output data. The analog input ports accept a voltage and then use a simple analog-to-digital converter (ADC) to convert the analog signal to a quantized numeric value.
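To give a sense of scale for this quantization: as described in Section 4.5.2.2, the converter is a 12-bit device (4096 levels), so assuming the full 0 - 1.8 V input range maps linearly onto those levels, the nominal resolution is 1.8 V / 4096 ≈ 0.44 mV per level. This is a back-of-the-envelope figure, but it is far finer than needed given the 1.5 °C accuracy of the temperature sensors used here.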
Though more will be discussed regarding the particulars, the relays discussed above were controlled via GPIO ports, while the temperature sensors were read via AIN ports.

The test fixture required four BeagleBone boards. Two BeagleBones were dedicated to the cooler units (one per cooler). Another BeagleBone was used for the heaters. Finally, a fourth BeagleBone was used to measure the ambient temperature of the room. Note that three of the four BeagleBone boards were used not only to control the on/off status of test fixture components, but also to record status and build data sets representative of that status. The remaining BeagleBone not included in this set of three was the one used to monitor the room temperature. No control was performed with this BeagleBone.

The requirements for the AIN ports were quite stringent. These ports could not accept a voltage higher than 1.8 V [33]. Most temperature sensors commercially available at a reasonable cost operate on much more than 1.8 V. Specifically, 3.3 V is a common operating point for these temperature sensors. In order to ensure that the reported voltages from the temperature sensors in the test fixture remained below the required 1.8 V threshold, the temperature sensor return signal was required to go through a voltage divider before reaching the AIN port. A standard voltage divider was designed to reduce the incoming voltage by 50% [34]. This meant that rather than expecting a voltage range of 0 - 3.3 V, the output of the voltage divider would read 0 - 1.65 V. This voltage divider was achieved using two 560 Ω resistors in a standard voltage divider network configuration, shown in Figure 4.9.

Figure 4.9: Voltage Divider Circuit

In the schematic above, R1 = R2 = 560 Ω. Since the divider output obeys Vout = Vin × R2 / (R1 + R2), Vout is exactly (within allowable resistor tolerances) half the value of Vin.

An available accessory for the BeagleBone was a proto cape. A proto cape is simply a circuit board that allows for the design of electrical circuits using through-hole components. The proto cape was specifically designed for the BeagleBone and, once assembled, was designed to mate with the 46-pin expansion headers on the BeagleBone. This accessory provided an ideal method for designing and installing the necessary voltage divider circuits directly on the BeagleBone. Additionally, connectors were soldered onto the proto cape such that connecting and disconnecting the test fixture was easily accomplished. In terms of signal flow, this placed the voltage divider physically and electrically between the incoming voltages and the BeagleBone. A photograph of one of the completed proto capes, alone as well as mated with the BeagleBone, is shown in Figures 4.10 through 4.12 to provide context. The proto cape shown in these three figures is the most complex of the three proto capes and was used for test fixture heater control. The cooler control proto capes each had one less voltage divider and one less temperature sensor connector.

Figure 4.10: Completed BeagleBone Proto Cape Photograph (Top)

Figure 4.11: Completed BeagleBone Proto Cape Photograph (Bottom)

Figure 4.12: Completed Proto Cape mated to BeagleBone

An individual power supply unit powered each BeagleBone. Each supply provided a clean, regulated 5 V output at up to 2 A [33].

4.5: Software Description

Of equal importance to the hardware design of the test fixture was the software design and engineering.
Because this test fixture was not strictly passive, there was a requirement for power and the switching of that power. Furthermore, these active components (such as the temperature sensors and the switching relays) needed to be queried for status, and the returned data collected and recorded for future review and analysis. As noted previously, the BeagleBone is a very capable single board computer that came preloaded with a specific distribution of the Linux operating system [33]. Linux natively offers mature scripting languages and simple file-based access options that aid in coding, making an embedded Linux platform an obvious choice for the test fixture design [35]. In this section, the software engineering and programming side of the test fixture will be discussed in detail. Whenever possible and where practical, code examples (snippets) will be used to illustrate algorithms and approaches taken in the implementation of the test fixture's software design.

4.5.1: BeagleBone Communication & Access

The BeagleBone was first placed on the Local Area Network (LAN) [36] to facilitate communication. This was largely due to the fact that the BeagleBone was not equipped with peripheral access ports [33]. As such, there was no way to make use of directly connected input devices (such as a keyboard or mouse) or display peripherals (such as a monitor). Once the BeagleBone was connected to the LAN, it was assigned an Internet Protocol (IP) address to facilitate packet transmission and reception [37]. This IP address was assigned via Dynamic Host Configuration Protocol (DHCP). DHCP is a service run from within the main network router and can be envisioned as a traffic director for the entire network. The use of DHCP ensured that each device received a unique IP address so that packet transmission and reception were performed successfully [38].

Once the BeagleBone was successfully assigned an IP address, communication with the BeagleBone was achieved through a Secure Shell (SSH). SSH allows for secure data communication via a remote terminal window on another computer [39]. Essentially, this allowed another computer's input and viewing peripherals to be used on the BeagleBone itself. For this particular test fixture, all SSH terminal windows were spawned on a central test computer (laptop). This central computer became the hub for communication to and from all four BeagleBone units used in the test fixture. The command to initiate an SSH terminal is ssh <user>@<host>, where <user> is replaced by the account name and <host> is replaced by the network name or IP address of the BeagleBone. Note that upon entering this command, the user is prompted for the required password before access to the host is granted.

In addition to SSH, another important protocol was used often in the test fixture software design. Secure File Transfer Protocol (SFTP) was used to transfer files to and from the BeagleBone. SFTP, like SSH, is a credential-based protocol requiring appropriate user authentication before access is granted. The command to initiate an SFTP session is sftp <user>@<host>, with <user> and <host> as above, and the user is again prompted for the required password. Once a session began, file transfer was initiated by using special commands such as "get" and "put" followed by the necessary filename and/or path desired [40].
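By way of illustration, a hypothetical session from the central test computer might look like the following (the account name, address and file names are placeholders, not the actual values used on the fixture network):

    ssh user@192.168.1.50
    (password entered when prompted)

    sftp user@192.168.1.50
    sftp> get /home/user/testOutput.txt
    sftp> put BB.pl
    sftp> bye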
4.5.2: BeagleBone Interface to Hardware Components

The software signal flow originated from real signals (voltages and currents) being input to and output from the BeagleBone. This was accomplished via the expansion headers previously discussed. There were two 46-pin expansion headers, labeled P8 and P9. These expansion headers facilitated many objectives via various ports, but the specific ones used in this test fixture were the General Purpose Input/Output (GPIO) and Analog INput (AIN) ports. To better understand the role that each of these features played in the overall test fixture design, each will be discussed individually and in detail. The default mapping of the expansion headers is shown in Figure 4.13.

Figure 4.13: BeagleBone Expansion Header Mapping (default) [33]

4.5.2.1: General Purpose Input/Output (GPIO) Ports

The sole use for GPIO pins in the context of this test fixture was to control the status of the relays (switch position open or closed as needed). The P8 expansion header contained most of the GPIO connectivity that the BeagleBone offered. The addressing and numbering of GPIO ports was pivotal and was achieved by considering the chip number and the pin on that chip used for signal routing. To successfully communicate with the GPIO port from the BeagleBone command line (SSH), the GPIO port assignment needed to be converted to a single number. In the generic form, each GPIO port was assigned by gpioX_Y nomenclature, where X represented the chip number and Y represented the pin on that chip; the pin value Y on any given chip ranged from 0 to 31. The necessary formula is described by Equation 4.2 [41].

GPIO# = X × 32 + Y     (4.2)

The decision was made to use Pin 11 and sometimes Pin 13 on the P8 expansion header for the output of control signals. Specifically, these ports were assigned gpio1_13 (P8 header, Pin 11) and gpio1_15 (P8 header, Pin 13). Using the formula above, it was clear that communication with gpio1_13 would be achieved by using the number 45 and, similarly, for gpio1_15, the number 47.

Once the pins were converted into numeric designators, they were used to write special files in the Linux operating system (OS). The first step was to export the pin. This action informed the OS of the desire to use this pin, and in turn a file structure was created for that pin. The export command is echo <Z> > /sys/class/gpio/export, where <Z> represents the numeric designator for the GPIO pin of interest (45 and 47 above) [41].

Once the pin was exported, the next step was to configure the pin for either input or output. This selection was made by writing "in" or "out" to the direction file for that pin. Note that this file was created in the pin's file structure via the aforementioned export process. For this particular test fixture design, the pins were to be used as output pins. The appropriate method for setting the pin direction is echo <dir> > /sys/class/gpio/gpio<Z>/direction, where <Z> represents the numeric designator for the GPIO pin of interest (45 and 47 above) and <dir> is replaced by "in" for input or "out" for output [41].

Setting the GPIO pin high or low was accomplished in a very similar manner. The export command also created a value file for the pin. Hence, a zero (0) or a one (1) was written to the value file as desired to change the status of the control signal associated with that pin.
The command for this operation is echo <val> > /sys/class/gpio/gpio<Z>/value, where <Z> represents the numeric designator for the GPIO pin of interest (45 and 47 above) and <val> is replaced by "0" for low or "1" for high [41]. Setting the pin high closed the relay switch and setting the pin low opened the relay switch.

When a pin was no longer needed, it was unexported. Unexporting the pin removed the file structure that was previously created for that pin. The command used to unexport the pin is echo <Z> > /sys/class/gpio/unexport, where <Z> represents the numeric designator for the GPIO pin of interest (45 and 47 above) [41].

Note that the processes discussed above could have been accomplished from any programming language using very simple system calls. This added a high degree of flexibility to the BeagleBone, as it relieved programming language constraints on the test fixture software design and allowed the most appropriate language for the task to be selected. To demonstrate this point, the generic approach for executing a command line entry from within a Perl script is via a system call. An example of a system call in Perl is system("echo 1 > /sys/class/gpio/gpio45/value");. Perl is a scripting language that is included in essentially all Linux distributions and is the language of choice for this test fixture [35].

4.5.2.2: Analog INput (AIN) Ports

The sole use for AIN pins in the context of this test fixture was to input the data read by the temperature sensors. The P9 expansion header contained all of the AIN connectivity that the BeagleBone offered [33]. Unlike the previously discussed GPIO ports, AIN port mapping and designation was very straightforward.

In total there were seven (7) multipurpose AIN ports, labeled AIN0 through AIN6 [33]. Pin 39 and sometimes Pin 33 on the P9 expansion header were used. Specifically, these ports were assigned AIN0 (P9 header, Pin 39) and AIN4 (P9 header, Pin 33). Similar to the special files that were used to access and control GPIO pins, AIN pins also had a special file. However, there was no need to export the pin to create the file structure. Instead, the file ain<Z> contained the value currently read in on that particular AIN port. The file was located in /sys/devices/platform/omap/tsc. The value retained in an AIN file was retrieved from the command line using a simple cat command. The command structure is cat /sys/devices/platform/omap/tsc/ain<Z>, where <Z> is the number corresponding to the AIN pin of interest. It is important to note that there was no such file named ain0. Although the AIN pins were labeled starting at 0 (0 through 6), the corresponding data was accessed by reading files with numbering starting at 1 (1 through 7). The value located in this file was a digitized (quantized) representation of the sensed analog voltage. Note that the Analog-to-Digital Converter (ADC) associated with each pin was a 12-bit successive-approximation converter with a maximum sample rate of 100 kHz [33], far more than the sample rate required for the test fixture.
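As a concrete illustration, a minimal Perl fragment for reading one of these files and converting the quantized value back to a voltage might look like the following; the variable names are illustrative, and the linear scaling assumes the full 0 - 1.8 V range maps onto the 4096 levels:

    # Read the quantized value for AIN0, which lives in the file ain1
    # (note the off-by-one file naming described above).
    open(my $fh, '<', '/sys/devices/platform/omap/tsc/ain1')
        or die "Cannot open AIN file: $!";
    chomp(my $raw = <$fh>);
    close($fh);

    # Scale the 12-bit reading (0-4095) back to the 0-1.8 V input range.
    my $volts = ($raw / 4095) * 1.8;
    print "AIN0 read $raw counts = $volts V\n";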
4.5.3: Scripting Language Selection

As briefly mentioned earlier, Perl was the language of choice for this test fixture design. Perl is a capable, multipurpose, interpreted programming language. Perl was first developed in 1987 and has grown significantly since its original inception, both in scope and utility [42]. Perl and Python were both considered when selecting the language to be used in the test fixture. Both were included in the BeagleBone Linux distribution and readily available for use. Both languages are widely used and are backed by many resources and reference manuals to aid the developer. Perl was ultimately selected due to previously gained familiarity and comfort with the language and syntax. Perl has proven itself to be the right choice for this test fixture design in many ways, but principally due to the ease with which files are accessed, read, and written in a variety of formats [42].

4.5.4: Core Programming Functions and Algorithms

The test fixture made use of several scripts, each of which performed a specific function. Certain tasks were performed repetitively within these scripts and, as such, were made into functions. These functions were called upon within the larger scripts as necessary. Each individual function is discussed in the following sections.

4.5.4.1: getEpochTimeFrame()

For a given test sequence, it was important to synchronize the execution of various scripts across all four BeagleBones. This was accomplished using defined start and stop times at the beginning of each script. These start and stop times were assigned to variables in the form HH:MM, where HH represented the hour and MM represented the minute of the start or stop time, respectively. Note that times were input using the 24-hour clock.

The format HH:MM is useful to humans when discussing time, but it is not the way the BeagleBone references time. Computers in general represent time as the number of seconds that have passed since a reference time in the past known as an epoch [43]. The epoch reference used by the BeagleBone Linux distribution is the Unix epoch, January 1, 1970 at 00:00:00 UTC. As an example, the epoch time for the BeagleBone Linux distribution is currently 1378407351 (at the time of this writing). That number, when translated, equates to ~43.68 years, which is consistent with September 2013.

In order to convert the entered start and stop times into more useful epoch times for comparison and decision making within the script, the function getEpochTimeFrame() was written. getEpochTimeFrame() received the variables containing the start and stop times. Logic was applied to these times to correctly calculate the corresponding epoch times. These newly calculated epoch times were returned to the main function for use. Note that if the stop time was earlier than the start time, it was assumed that the test spanned midnight and was scheduled to terminate on the following day. The syntax for this function was getEpochTimeFrame(<start>,<stop>), where <start> was replaced by the test start time in the form HH:MM and <stop> was replaced by the test stop time in the form HH:MM. The complete code for getEpochTimeFrame() can be found in Appendix A.
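To make the conversion concrete, a condensed sketch of the logic just described is shown below. It is not the Appendix A implementation (which remains authoritative); the core module Time::Local is used here for the epoch arithmetic:

    use Time::Local;

    sub getEpochTimeFrame {
        my ($start, $stop) = @_;     # e.g., ("14:30", "16:00"), 24-hour clock
        my ($startHr, $startMin) = split /:/, $start;
        my ($stopHr,  $stopMin)  = split /:/, $stop;

        # Build epoch times using today's date.
        my ($mday, $mon, $year) = (localtime)[3, 4, 5];
        my $startEpoch = timelocal(0, $startMin, $startHr, $mday, $mon, $year);
        my $stopEpoch  = timelocal(0, $stopMin,  $stopHr,  $mday, $mon, $year);

        # A stop time earlier than the start time means the test spans midnight.
        $stopEpoch += 86400 if $stopEpoch < $startEpoch;

        return ($startEpoch, $stopEpoch);
    }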
4.5.4.2: exportGPIO()

Before using a GPIO pin on the BeagleBone expansion header, the pin was exported. This process was a one-line command, executable from the command line. Since some test scripts made use of more than one GPIO, this process was streamlined into a function, exportGPIO(). exportGPIO() received the pin number (single-number converted representation) and then made the appropriate system call to export the pin. The syntax for this function was exportGPIO(<pin>), where <pin> was replaced by the pin number. The complete code for exportGPIO() can be found in Appendix B.

4.5.4.3: unexportGPIO()

When a GPIO pin was no longer being used, it was unexported. This process was a one-line command, executable from the command line. Since some test scripts made use of more than one GPIO, this process was streamlined into a function, unexportGPIO(). unexportGPIO() received the pin number (single-number converted representation) and then made the appropriate system call to unexport the pin. The syntax for this function was unexportGPIO(<pin>), where <pin> was replaced by the pin number. The complete code for unexportGPIO() can be found in Appendix C.

4.5.4.4: initGPIO()

A necessary precursor to using a GPIO pin was defining the direction in which the pin would be used, input or output. This initialization process was a one-line command, executable from the command line. Since some test scripts made use of more than one GPIO, this process was streamlined into a function, initGPIO(). initGPIO() received the pin number (single-number converted representation) and the desired direction (in or out) and made the appropriate system call to set the direction of the pin. The syntax for this function was initGPIO(<pin>,<dir>), where <pin> was replaced by the pin number and <dir> by the desired direction (in or out). The complete code for initGPIO() can be found in Appendix D.

4.5.4.5: setGPIO()

Once a GPIO pin was exported and initialized, its value was available to be set or read. If the pin was initialized for output, the value could be set high or low (1 or 0). If the pin was initialized for input, the value being sensed on that pin could be read. Since the GPIO pins were used exclusively as output control pins (for relay switch commanding) in this test fixture, the GPIOs were always configured for the output direction. As such, the function setGPIO() was created to set the output value. setGPIO() received the pin number and the desired value and used both pieces of information to correctly set the output value of the pin. The syntax for this function was setGPIO(<pin>,<val>), where <pin> was replaced by the pin number and <val> by the desired output value (0 or 1). The complete code for setGPIO() can be found in Appendix E.
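Taken together, these helpers are thin wrappers around the sysfs commands of Section 4.5.2.1. A condensed sketch of the whole family is shown below; the bodies follow directly from the command-line forms already given, but the authoritative versions are in Appendices B through E:

    # Thin wrappers around the Linux sysfs GPIO interface. $pin is the
    # single-number designator from Equation 4.2 (e.g., 45 for gpio1_13).
    sub exportGPIO   { my ($pin) = @_; system("echo $pin > /sys/class/gpio/export"); }
    sub unexportGPIO { my ($pin) = @_; system("echo $pin > /sys/class/gpio/unexport"); }

    sub initGPIO {
        my ($pin, $dir) = @_;    # $dir is "in" or "out"
        system("echo $dir > /sys/class/gpio/gpio$pin/direction");
    }

    sub setGPIO {
        my ($pin, $val) = @_;    # $val is 0 (relay open) or 1 (relay closed)
        system("echo $val > /sys/class/gpio/gpio$pin/value");
    }

    # Example: close the relay wired to gpio1_13.
    exportGPIO(45);
    initGPIO(45, "out");
    setGPIO(45, 1);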
4.5.4.6: readTemp()

Temperature sensors were used throughout this test fixture. The value that was produced by the temperature sensor was not the actual temperature sensed in the environment. Rather, the temperature sensor used a thermistor to alter an input voltage (3.3 V) provided by the BeagleBone appropriately, and this voltage was used to calculate the sensed temperature. The thermistor used was a 10 kΩ thermistor, indicating that at room temperature (25 °C), the thermistor returned a resistance of 10 kΩ [30].

The voltage value was read by the BeagleBone as an analog input and digitized to reflect the resistance of the thermistor. This analog voltage was within the range of 0 V to 3.3 V. Since AIN pins would be damaged if presented with more than 1.8 V [33], the proto cape voltage divider regulated this voltage to between 0 V and 1.65 V, exactly 50% of the actual voltage coming back from the sensor. Once the analog voltage was divided to a safe level through the voltage divider, it was routed into the AIN port and digitized (quantized) into a 12-bit representation (4096 levels). This value was found in the appropriate AIN file for that particular pin (/sys/devices/platform/omap/tsc/ain<Z>, where <Z> was mapped to the AIN port). In order to convert this value to a temperature, the Steinhart-Hart relationship, in its simplified B-parameter form, was used (Equation 4.3) [29].

R = R0 × e^(B(1/T - 1/T0))     (4.3)

In this relationship, R is the sensed resistance, R0 is 10 kΩ, B is 3975 (a number representing the slope of the thermistor's resistance vs. temperature curve), T is the sensed temperature, and T0 is 25 °C (represented as 298.15 K). There are only two unknowns in this equation, R and T. R was provided by the temperature sensor and hence, T was easily calculated for a particular time-value of R. This process was performed multiple times in the larger complement of code in the test fixture software, and as such, the readTemp() function was created.

In addition to the raw conversion of resistance to temperature, readTemp() also performed the following functions:
1. Adjust the read value to account for the presence of the voltage divider
2. Average n (typically 20) samples (reads or sensor polls) to remove inherent noise from the reading

The complete code for readTemp() can be found in Appendix F.
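Solving Equation 4.3 for T gives T = 1/(1/T0 + ln(R/R0)/B). A minimal sketch of readTemp() built on that inversion follows; the divider compensation and the averaging mirror the two steps just listed, while the recovery of R from the measured voltage assumes the sensor acts as a divider against a 10 kΩ reference fed from 3.3 V (an illustrative assumption; the actual network is described in [30] and implemented in Appendix F):

    use constant { R0 => 10000, B => 3975, T0 => 298.15 };

    sub readTemp {
        my ($ain, $n) = @_;              # AIN file number, samples to average
        $n ||= 20;                       # default to 20 polls per reading
        my $sum = 0;
        for (1 .. $n) {
            open(my $fh, '<', "/sys/devices/platform/omap/tsc/ain$ain") or die $!;
            chomp(my $raw = <$fh>);
            close($fh);
            $sum += $raw;
        }
        my $counts = $sum / $n;

        # Undo the 50% proto cape divider, then map counts back to sensor volts.
        my $volts = 2 * ($counts / 4095) * 1.8;

        # Illustrative recovery of thermistor resistance, assuming the sensor
        # forms a divider against a 10 kOhm reference fed from 3.3 V.
        my $r = R0 * $volts / (3.3 - $volts);

        # Invert Equation 4.3; result returned in degrees Celsius.
        my $tKelvin = 1 / (1 / T0 + log($r / R0) / B);
        return $tKelvin - 273.15;
    }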
4.5.5: Main Scripts for Test Execution

Several scripts were written to perform testing. Each phase of testing required specific scripts. Each script was responsible for reading and recording data from the test fixture, and, additionally, some scripts also commanded particular relays to effect change in the test fixture. These scripts made use of the previously described core functions to accomplish similar tasks. The purpose of this section is to describe the overall functionality and logic in these scripts.

Each of the four BeagleBones ran a script (or set of scripts) unique to the function that specific BeagleBone was designed to facilitate. Two of the BeagleBones were designed to control the cooler assembly units. The code running on these boards was quite similar. The remaining two BeagleBones were responsible for performing tasks that were common to the entire test fixture and its environment.

4.5.5.1: BB.pl & PID.pl

The BeagleBones that were connected to each cooler assembly were designed to accomplish two main tasks. First, these boards needed to accept, process and record the data from a cooler-specific (locally mounted) temperature sensor. Second, these boards controlled the cooling element (on/off) in order to manage the temperature of the local environment (small aluminum plate). Control of the local temperature was accomplished via a number of control algorithms. As the overall goal of this research was to propose and demonstrate a new data transmission technique, it was attempted with two unique foundational control data sets, bang-bang and PID (Proportional-Integral-Derivative). A script was written for each of these control methods. Each script has a .pl filename extension, the conventional extension for a Perl script [43].

The most rudimentary form of closed-loop control is the bang-bang controller [4]. This is commonly seen in home thermostats and can basically be described as a sensor-threshold based algorithm. When sensor readings are above or below the given threshold, a particular action is either initiated or ceased as appropriate. To form a basis for comparison, this approach was implemented in BB.pl. The algorithm is shown in Figure 4.14 in block diagram form, and the code in its entirety can be viewed in Appendix G.

Figure 4.14: BB.pl Software Block Diagram

A far more capable and sophisticated control algorithm is found in the PID controller. This controller uses the sum of three error terms (Proportional, Integral and Derivative) to drive the system response to the desired set point. A diagram of this control method is shown in Figure 4.15.

Figure 4.15: The PID Controller

The Proportional term (P) is best described as the present error. The Integral term (I) is best described as the accumulation of the past errors (integration). Finally, the Derivative term (D) is best described as the prediction of the future error (rate of change of the error). Each of these terms is weighted by a gain term (Kp, Ki, Kd), so that the commanded output takes the form u(t) = Kp·e(t) + Ki·∫e(τ)dτ + Kd·de(t)/dt, where the error e(t) is the difference between the set point s(t) and the plant output y(t). These gain terms are often determined using the Ziegler-Nichols tuning method, of which a modified version was used for this effort. A script implementing this control algorithm is provided in PID.pl and is shown in block diagram form in Figure 4.16. The code in its entirety can be viewed in Appendix H.

Figure 4.16: PID.pl Software Block Diagram

The main uses of these scripts were to 1) provide a basic control performance metric against which future performance with the new data transmission technique would be compared, 2) facilitate data collection for baselining the test fixture performance, and 3) provide data that would be used for modeling the cooler and test fixture behavior. A sample of the data file created by BB.pl and PID.pl is shown in Figure 4.17.

Figure 4.17: Output Data File Example Created by BB.pl & PID.pl

The data file was written every second in a tab-delimited format [44] with four columns. The first column was the sample number (incremented every second), the second column was the epoch time, the third column was the temperature (averaged over n samples, typically 20), and the final column was the status of the associated relay (which controlled power to the cooler). Note that for the cooler relay status, 0 indicates OFF (relay switch is open) and 1 indicates ON (relay switch is closed).
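For reference, a discretized form of the PID law above, as it might appear inside a one-second control loop like that of PID.pl, is sketched below. The gains, set point and actuation threshold are placeholders, and readTemp(), setGPIO() and getEpochTimeFrame() are the functions described earlier; Appendix H contains the actual implementation:

    my ($Kp, $Ki, $Kd) = (1.0, 0.1, 0.05);   # placeholder gains
    my $setPoint = 26.0;                     # placeholder set point, degrees C
    my ($integral, $prevError) = (0, 0);
    my $dt = 1;                              # one-second loop, matching the scripts
    my (undef, $stopEpoch) = getEpochTimeFrame("14:00", "16:00");

    while (time() < $stopEpoch) {
        my $temp  = readTemp(1);             # current local temperature
        my $error = $setPoint - $temp;

        $integral += $error * $dt;                     # past error (I)
        my $derivative = ($error - $prevError) / $dt;  # error trend (D)
        my $u = $Kp * $error + $Ki * $integral + $Kd * $derivative;
        $prevError = $error;

        # The cooler is a simple on/off device, so the continuous command is
        # thresholded: close the relay whenever the controller calls for cooling.
        setGPIO(45, $u < 0 ? 1 : 0);
        sleep 1;
    }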
4.5.5.2: mbtBB.pl

An additional level of concept verification was performed in which the very idea of model-based data transmission was put to the test without the implementation of model corrections. A script entitled mbtBB.pl was created to execute this style of data transmission and facilitate the corresponding control data transfer and execution. In contrast to a bang-bang or PID control approach, this script used a model-based data transmission technique to control cooler behavior and the associated local temperature. The model was created from previously collected and processed data gathered with BB.pl. The model was processed and transmitted over the data link and held static throughout the test run. No algorithms or updates were used to correct the model in real time. The software logic flow diagram is shown in Figure 4.18, and the code in its entirety can be viewed in Appendix I.

Figure 4.18: mbtBB.pl Software Block Diagram

The main use for this script was to demonstrate that an acceptable model of the plant could be derived, compressed, transmitted over the link, and used effectively by the remote system to provide some level of effective control. As such, once successfully demonstrated with bang-bang control data transmission, this phase of characterization was not duplicated for PID control data transmission. Rather, more complex forms of model-based data transmission were executed immediately, as will be discussed. As shown in the block diagram (Figure 4.18), n cycles of traditional data transmission (typically 3-5) were performed before transitioning to a model-based approach. Note that mbtBB.pl and BB.pl were written to have the same output data file structure (displayed in Figure 4.17).

Although not the ultimate goal of this research, this script provided a good trial for many of the core pieces of the puzzle, including the establishment of the model and processing of the model on both ends (in the code and by the hardware). This script made use of a hash data structure (associative array) to ingest the model and then use the model to provide cooler control. A Perl hash (associative array) is similar to a basic array with a few differences. While an array uses a numeric index to reference or access one particular element, a hash contains key-value pairs [43]; elements are accessed by their associated key rather than by a numeric index. Multiple values can be associated with the same key, if desired, by storing a reference to a list. This particular data structure is very useful when reading and using complex data sets such as the model that was used in this research, mainly because the various elements in the model were keyed from values that were not necessarily contiguous in nature.
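As a brief illustration, the fragment below loads a hypothetical two-column model file (a temperature key and a predicted cooler on-time per line) into a hash. The file name, format and keys are invented for the example and do not reflect the actual model layout:

    my %model;
    open(my $fh, '<', 'model.txt') or die "Cannot open model: $!";
    while (my $line = <$fh>) {
        chomp($line);
        my ($key, $value) = split /\t/, $line;
        $model{$key} = $value;      # keys need not be contiguous
    }
    close($fh);

    # Look up the modeled on-time for a particular (non-contiguous) key.
    my $onTime = $model{'26.5'};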
4.5.5.3: acmbt1BB.pl & acmbt1PID.pl

Building upon the technique proven in mbtBB.pl, acmbt1BB.pl and acmbt1PID.pl demonstrated the full realization of algorithmically corrected model-based data transmission, implemented in Perl code and applied to the control of the cooler units. Both of these scripts took in models that looked identical. Although the foundational control algorithms that these scripts were built on were completely different, the output files (like the input models) also looked identical. These scripts used a numeral 1 in their naming convention to represent that they employed a single algorithm for determining model updates. These algorithms were unique for each script and will be discussed at length in the following chapter. Additionally, each script had a unique cycle by which model updates were executed. A flow diagram for each script can be seen in Figures 4.19 and 4.20, while the complete code can be referenced in Appendix J and Appendix K.

Figure 4.19: acmbt1BB.pl Software Block Diagram

Figure 4.20: acmbt1PID.pl Software Block Diagram

The data file produced by these scripts was written every second in a tab-delimited format [44] with five columns. This format was created to be nearly identical to all previously discussed output files, with one additional column displaying the number of elapsed model updates. An example of this data file is shown in Figure 4.21.

Figure 4.21: Output File Excerpt (Created by acmbt1BB.pl & acmbt1PID.pl)

4.5.5.4: acmbt2BB.pl

The original scope of this experimental research was centered on using bang-bang control data for transmission. The inclusion of PID control data for testing with this model-based data transmission technique was an additional objective. As such, an additional level of algorithmic acuity was added to refine model accuracy and keep model updates to a minimum for the bang-bang foundational case. This was implemented in acmbt2BB.pl, where the numeral 2 represents the fact that two algorithms were at work in the code. This script was identical to acmbt1BB.pl with one additional non-real-time algorithm utilized at the start of the code. This script used previously gained knowledge of environmental factors, namely room temperature, to update the entire model appropriately before it was transmitted across the data link. In doing so, the model, and all the associated parameters stored within, was updated to reflect the predicted environmental conditions that were to transpire throughout the upcoming test period. This process ensured that the model was as closely matched as possible to the conditions for which it would be used, further ensuring that real-time updates completed by algorithm 1 (the real-time algorithm) would be minimized. This modification added one more logical element to the block diagram shown for acmbt1BB.pl and can be seen in Figure 4.22. The complete code can be seen in Appendix L.

Figure 4.22: acmbt2BB.pl Software Block Diagram

The data file produced by this script was written every second in a tab-delimited format [44] with five columns. The scripts acmbt2BB.pl and acmbt1BB.pl were written to produce the same output data file structure. An example of this data file is shown in Figure 4.21.

4.5.5.5: heaterControl.pl

Affixed to the bottom of the large aluminum baseplate was an edge-to-edge heating element. A similar, but much smaller, heater was attached to the top side of the large aluminum plate. Both heaters were used to stabilize and/or modify the plate temperature as desired for the test being performed. The heaterControl.pl script was used to accomplish control of the heaters. The control method used for the heaters was of little significance to the larger goal. As such, a simple bang-bang control was used to toggle power to the heating elements to maintain a desired plate temperature. Plate-mounted temperature sensors were used as sensor input for the heater control loop. The algorithm is shown in Figure 4.23 in block diagram form, and the code in its entirety can be viewed in Appendix M.

Figure 4.23: heaterControl.pl Software Block Diagram

The script heaterControl.pl was created to produce an identical output file structure to those written by the cooler control scripts discussed earlier. Data was sampled each second and recorded to a tab-delimited output file. Note that the bottom-mounted large heater and the top-mounted small heater were controlled identically (with the same threshold, resulting in the same on/off times). If desired, each heater could have been controlled independently using different thermistors for input. If so, the algorithm and the data output file would have required adjustment to capture the pertinent status information.

4.5.5.6: roomTempRecord.pl

In addition to knowing the temperature of the plate and the local temperature of each cooler, it was important to know the surrounding air temperature (environment). This information was critical in understanding the dynamics and behavior of the overall test fixture. In order to sample this data, a script titled roomTempRecord.pl was created. Unlike the cooler and heater control scripts, this particular script had no control requirement. The only objective of roomTempRecord.pl was to read data from a temperature sensor mounted in the room where the test was performed. As such, the output data file produced was slightly different in structure, since there was no need for a relay status column. A small excerpt of the output file can be seen in Figure 4.24.

Figure 4.24: Output Data File Excerpt Created by roomTempRecord.pl

The algorithm is shown in Figure 4.25 in block diagram form, and the code in its entirety can be viewed in Appendix N.

Figure 4.25: roomTempRecord.pl Software Block Diagram
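Before turning to data processing, it is worth tying the helper functions together. A condensed sketch of the sample-and-control pattern shared by these scripts appears below, shown here in the cooler's bang-bang sense (the heater inverts the comparisons, and roomTempRecord.pl simply omits the relay logic). The thresholds, pin and AIN designators and file name are placeholders; Appendices G, M and N hold the real scripts:

    my ($lower, $upper) = (25.0, 27.0);   # placeholder thresholds, degrees C
    my ($pin, $ain) = (45, 1);            # placeholder GPIO and AIN designators
    my ($startEpoch, $stopEpoch) = getEpochTimeFrame("14:00", "18:00");

    exportGPIO($pin);
    initGPIO($pin, "out");
    setGPIO($pin, 0);                     # begin with the relay open
    my $state  = 0;
    my $sample = 0;

    sleep 1 while time() < $startEpoch;   # wait for the scheduled start

    open(my $out, '>', 'output.txt') or die $!;
    while (time() < $stopEpoch) {
        my $temp = readTemp($ain);

        # Bang-bang logic: act only when a threshold is crossed.
        if    ($temp > $upper && !$state) { setGPIO($pin, 1); $state = 1; }
        elsif ($temp < $lower &&  $state) { setGPIO($pin, 0); $state = 0; }

        # Tab-delimited record: sample number, epoch time, temperature, relay state.
        print $out join("\t", ++$sample, time(), $temp, $state), "\n";
        sleep 1;
    }
    close($out);
    unexportGPIO($pin);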
4.5.6: Data Processing & Analysis Code

The data collection process was only the first step in understanding the performance of the test fixture. Once the data was collected, it had to be processed, presented in an appropriate manner, and then dissected to draw meaningful conclusions.

Microsoft Excel is a spreadsheet tool that has become the industry standard for comparing, trending and displaying large quantities of data. The output files produced by the Perl code were specifically designed (tab-delimited) to be easily imported into Excel, with each piece of information being assigned its own cell in the spreadsheet. Once the data was imported into Excel, however, large amounts of tedious manual effort were required to complete the analysis and present the data. For this reason, an automated solution was explored. Completing this task is not necessarily the forte of the Perl scripting language. A more compelling choice is MATLAB by MathWorks. MATLAB (short for MATrix LABoratory) is described as a "high-level language and interactive environment for numerical computation, visualization, and programming" [45]. The same central computer that was tasked with coordinating and controlling the BeagleBones was also equipped with MATLAB, and was used for data analysis.

Two MATLAB scripts, dataAnalysisReport.m and dataAnalysisReportUpdates.m, were written to perform the following functions:

1. Ingest all the data produced (3 data files) by the test fixture for a single test into data arrays:
   a. Cooler unit temperature, cooler relay status and (if available) model update status
   b. Baseplate temperature and heater relay status
   c. Room environment temperature
2. Condition the data arrays to ensure all arrays contain the same number of points
3. Convert Unix epoch time stamps to MATLAB serial date format (dividing by 86,400 seconds per day and adding the serial date number of the Unix epoch)
4. Plot cooler temperature, cooler relay status, baseplate temperature, room temperature and (if available) model update status on zoom-enabled, double Y-axis plots
5. Analyze the data arrays to calculate the following metrics:
   a. Epoch timestamps for cooler relay status changes (from on to off or off to on)
   b. On and off time durations (whole seconds) for each cooler relay cycle
   c. Sliding window average (regression) of on and off time durations as a function of time
6. Plot on and off time durations (seconds) and associated regression lines, baseplate temperature and room temperature on a single, zoom-enabled plot
7. Save associated plots and figures for future analysis
8. Process and calculate Figure of Merit (FOM) contributors for each test, including:
   a. Time the cooler unit temperature spends outside a predefined range
   b. Maximum and minimum excursions of cooler unit temperature from a defined set point
   c. Average cooler unit temperature for the test duration

Utilizing a scripting language such as MATLAB to perform this trending, analysis and presentation was significantly more efficient than manually performing these tasks with a spreadsheet editor such as Excel. In fact, the scripts described above provided the desired results in just seconds. This allowed test data to be reviewed in a matter of minutes following test completion. Additionally, automating these tasks simplified any necessary changes to the desired analysis algorithms.
Once a change was proposed, it was implemented in code, and the raw data was re-processed in batch, quickly and efficiently. Examples of the plots produced by this MATLAB code are shown in the next chapter. The dataAnalysisReport.m and dataAnalysisReportUpdates.m MATLAB scripts can be viewed in Appendix O and Appendix P.

4.6: Test Fixture Characterization

Before implementing a new, cutting-edge data transmission technique, it was important to verify correct operation and functionality of the individual components used in the test fixture. For some of the components, this was simply done by exercising the unit to ensure that it had not arrived damaged. For other components, proper verification required functionality testing and calibration. Finally, once each of the individual components was verified and calibrated, the entire system (test fixture) was run to ensure total system functionality. In doing so, intermediate fittings, connectors and wires were indirectly verified.

4.6.1: Component Level Verification & Calibration

Every device or component used in the test fixture was in need of verification and/or calibration. In this section, this process will be discussed and documented.

4.6.1.1: BeagleBone Initialization & Verification

Though the BeagleBone is a well-designed, sophisticated device, each one was probed and verified to a reasonable level. Each board ran the same Linux distribution, so initialization and verification were performed in the same manner on each board.

As described earlier, communication with the BeagleBone required connecting each one to the network (LAN). For ease of use, each particular BeagleBone was given the same IP address upon boot-up. To accomplish this goal, DHCP reservations were made. A reservation table is available in the network router so that individual components can be identified upon network connection and given the same IP address each time. There are a few ways for the network router to identify a unique piece of hardware. The most common approach is to identify the Media Access Control (MAC) address [38]. The MAC address is like a fingerprint, and each BeagleBone has one. It is a unique 12-digit hexadecimal identifier of the form ab:cd:ef:gh:ij:kl (e.g., 24:c0:8d:95:1e:f5). By configuring the DHCP reservation table with these MAC addresses and a desired IP address to be associated with each MAC address, each BeagleBone was given the same IP address every time the test fixture was powered on.

In pre-planning the output data file format, it was clear that each BeagleBone would need to be in sync with respect to time. Fortunately, the BeagleBone allowed for usage of the Network Time Protocol (NTP) [46]. NTP was configured within the file structure of the Linux OS and, once the time zone was selected, system time was synced to a publicly accessible time server. By syncing each BeagleBone to the same time server, they were essentially synced to one another with regard to time.

In addition to verifying proper network connectivity and time initialization, the selection of which GPIO ports to use remained. In order to easily verify proper operation of the GPIO ports, a simple circuit was designed and built using a Light-Emitting Diode (LED) [41]. With the LED connected to the GPIO pin on one side and the BeagleBone ground pin on the other, visual confirmation of GPIO operation was provided. When the GPIO pin was set high, the LED turned on; when it was set low, the LED turned off.
With so many options, and only two (at most) GPIO pins required from each board, it seemed that GPIO pin selection would be fairly straightforward. However, using this simple circuit to test the GPIO pins led to a significant discovery. The most obvious initial choice for a GPIO pin was gpio1_6 (38), the first usable GPIO pin on the P8 expansion header [33]. When gpio1_6 was set high, the LED would glow brightly; however, when gpio1_6 was set low, the LED's brightness was reduced but the LED was not completely turned off. A multimeter proved that a small voltage leaked through gpio1_6 even when the pin was set low. Further testing and evaluation proved that this was the only GPIO pin that behaved abnormally. As such, alternate GPIO pins gpio1_13 (45) and gpio1_15 (47) were selected for test fixture use. This discovery was critical to the overall functionality of the test fixture. As these GPIO pins would ultimately control power supply flow to test fixture components, low-voltage leaks to the relays could have resulted in undesired current flow to units during data collection and testing.

Finally, the added voltage divider circuit on the BeagleBone proto cape required individual verification. The consequence of this circuit not being wired correctly was significant: the BeagleBone was one of the most expensive components of the test fixture, and protecting these boards was a priority in the overall design. Before mating the proto cape to the BeagleBone, a voltage was supplied to the front end of the voltage divider and the output was measured. Once each voltage divider circuit was verified for proper operation, mating of each proto cape to the associated BeagleBone was allowed.

4.6.1.2: Relay Characterization & Verification

Each relay was individually tested to ensure proper operation. This was accomplished with a very simple script that toggled the relay via the GPIO pin. All the relays (eight in total) functioned properly. Some relays displayed sensitivity to connector movement; if the connector cable was moved slightly, relay performance would become intermittent. Although all test fixture components were to be static in placement and were not to be moved during test execution, the four best relays (least sensitive to movement) were selected as primary parts, while the remaining four relays were retained as spares.

4.6.1.3: Temperature Sensor Verification & Calibration

Each temperature sensor was verified for correct operation. This included verifying the electrical contact at the connector, especially given the sensitivity to movement noted on
A very accurate temperature-sensing unit (surface probe) was used to measure the temperature of the surface at various times. This value was then compared to the temperature reported by the nearby temperature sensor. Careful and meticulous modifications were made to the maximum quantization level until the independent measurement methods agreed. This process was repeated several times, at various temperatures and times of day, for each sensor to ensure accurate readings. 91 4.6.1.4: Heater Verification The heaters used in this test fixture were very simple and straightforward devices. The heaters were custom made to size and power requirements. Each heater was verified using two methods. First, the heater was connected to a power supply providing the heater with the specified (required) voltage. A current measurement was made in line to ensure that the heater drew the correct current at the specified voltage. This measurement allowed the power rating of the heater to be verified against the stamped value. The large 60 W (rated) heater drew 2.1 A at the supplied 28 V. The small 30 W heater drew 1.1 A at the supplied 28 V. Second, the heaters were validated by a combination of analysis and empirical data collection. The change in energy required to elevate the baseplate to 36 °C from room temperature (25 °C), was approximated by Equation 4.4 [47]. 𝑄=𝑚 × 𝐶 ! × 𝑇 ! −𝑇 ! (4.4) In Equation 4.4, m is the mass of the plate (14335.4 g), C p is the specific heat of 6061 Aluminum at constant pressure (2.7 g/cm 3 ), and the quantity 𝑇 ! −𝑇 ! represents the change in baseplate temperature. Evaluating this equation for a change of 11 °C, resulted in a Q value of 141,920.46 J. Given the confirmed wattage of the large heater (60 W), and the definition of a Joule (1 J = 1 Ws), a predicted time was calculated to achieve the final plate temperature. By this analysis, a time of approximately 40 minutes was calculated to 92 heat the plate from room temperature to 36 °C. By the same approach, both heaters used in tandem (a total of 90 W) would heat the plate in approximately 27 minutes. Both of these numbers were confirmed empirically, further verifying the performance of the heaters. 4.6.1.5: Cooler Verification Each cooler unit was also individually verified. Power was supplied to both active components (Peltier cooler and rotary fan) in the 3-piece unit, and proper functionality was verified for both. Additional unit testing was incorporated in sub-system level verification and will be discussed in Section 4.6.2. 4.6.2: Sub-system Level Verification After verification of individual components was performed, intermediate level functionality was verified. Ultimately, the purpose of this testing was to verify that temperature sensor data could be read by the BeagleBone (via the proto cape) and used to make decisions that would successfully drive the relay state. This level of verification incorporated the sensor, BeagleBone connectivity and proto cape circuitry, and the relay together for the first time. All of the necessary electrical connections were made, with the relay connected to an LED. A hair dryer was used to heat the temperature sensor on demand. In real time, the 93 BeagleBone software (Perl) processed this data, responded accordingly, and triggered the relay at the appropriate time. This allowed the LED to light up and turn off per the algorithm programmed and running (Perl) on the BeagleBone. 
Completing this sub- system evaluation and verification, provided the appropriate level of certainty that total system level integration would be successful. 4.6.3: Test Fixture Baseline Having performed a detailed verification of individual components and even the end-to- end sub-system string, the final verification task was to assemble and baseline the complete test fixture. Once assembled, initial test fixture baselines were performed. These preliminary baselines ranged from minutes to hours in duration, and led to observations and discoveries that resulted in test fixture design modification. Subsequently, multipurpose extended duration characterizations were performed to determine trends and test fixture dependency (if any) on external variables, and to burn- in components. Burn-in testing is commonly performed in test scenarios in order to detect and allow for unit failure before prolonged testing commences [48]. These 24- hour baseline tests were run back-to-back for several days. Results of both the short and extended baseline characterizations are described hereafter. 94 4.6.3.1: Initial Test Fixture Baseline Four tests ranging from 5 to 8 hours were performed. For these tests, a simple bang-bang control algorithm was implemented for the cooler units. The plate temperature was maintained between 40 °C and 41 °C, while the cooler units were controlled between 35 °C and 37 °C. These tests revealed a large variability in cooler performance. Some cycles (cooling from 37 °C to 35 °C), took 50% more cooling time (cooler on time) to complete, compared to cycles immediately before or after. Unsure of the exact cause of this variation, an investigation was initiated to determine the root cause. In an attempt to decrease this variation, the following measures were taken. The Heating, Ventilation and Air Conditioner (HVAC) flow to the room housing the test fixture was eliminated, removing the source of a significant environmental variation. Another series of short tests were performed with the HVAC system off. These test results showed continued variation, but the large shifts in cooling cycle times were eliminated. In order to reduce the minimal variation that remained, the investigation continued. Air from the coolers’ rotary fans was found to be blowing through the heat sinks and directly onto the plate mounted temperature sensors. To prevent this airflow from contaminating 95 the temperature readings, custom cut insulation housings were created for each of the four temperature sensors used in the test fixture (see Figure 4.26). Figure 4.26: Temperature Sensor Insulation Housings Once made, these housings were mounted over the temperature sensor and thermistors, completely shielding each pair from undesired external influences. One final modification was made in the power routing to the cooling units. Originally, the cooling element (Peltier) and the rotary fan received power simultaneously. When the relay for each cooler unit would close, both the fan and the cooling element were engaged. This 96 design allowed the initial heatsink temperature (starting temperature for each cycle) to vary, being largely dominated by the ambient air temperature. As Peltier cooler performance is significantly affected by the temperature delta between the hot and cold side of the element [26], reduction of potential variation was desired. The power distribution was, therefore, redesigned to decouple the power to the fan and the cooling element. 
A small power supply rated at 12 V and 2 A output was modified to supply power to the fans continuously during testing.

Figure 4.27: DC power plug soldered to power cooler fans

Figure 4.27 shows the branching of power from the small power supply to each fan (one for each cooler unit). Figure 4.28 shows the decoupled power routing for the cooler's rotary fan and thermoelectric device. The black and red wire pair running to the top of the photograph powers the cooler's rotary fan (now energized by the dedicated supply), while the red and black wire pair running beneath the heatsink powers the thermoelectric cooling element.

Figure 4.28: Individual power to the Peltier and the rotary fan

After completing these modifications, a series of additional tests was performed. These tests demonstrated that much of the remaining variation inherent to the original test fixture design had been successfully removed.

4.6.3.2: Diurnal Characterization and Test Fixture Burn-in

With the internal test fixture variation sources addressed, attention turned to characterizing the external factors. This characterization process required extended-duration continuous testing in order to identify trends and correlations in the data set. Additionally, extended-duration tests provided an ideal opportunity for burn-in testing. The parameters for the extended-duration tests were to keep the plate at approximately 36 °C while using a simple bang-bang control approach to maintain the local cooler-controlled temperature between 25 °C and 27 °C. Although the HVAC system was turned off, the temperature in the room varied diurnally as the outside temperature fluctuated. Each test ran for 24 hours, and six tests were run back-to-back. Figures 4.29 and 4.30 display the raw data from the first full diurnal characterization.

Figure 4.29: First Diurnal Characterization, Raw data

Figure 4.30: First Diurnal Characterization (Zoomed), Raw data

A few observations were made from this data. First, it was clear that the cooler cycled continuously through the 24-hour test; no long lapses were noted in which the cooler remained on or off for an extended period. Second, the plate temperature stayed elevated for the duration of the test, at the desired 36 °C. Third, the local cooler-controlled temperature held well to the desired thresholds, with the typical overshoot expected of a bang-bang controller [4]. Finally, the room temperature demonstrated the well-behaved sinusoidal trend expected of a typical day-night environment. This data also reconfirmed correct operation of all test fixture components, hardware and software alike, and now verified the analysis tools as well.

The first diurnal characterization revealed critical information with regard to parameter correlation, which previous short-duration tests did not show clearly. Though the raw data presented above was meaningful, further investigation and analysis of the same data produced a more telling picture that formed the basis for model-based data transmission techniques. Analysis of cooler on and off time durations yielded the following data presentation (see Figure 4.31).

Figure 4.31: First Diurnal Characterization, Per-cycle metrics

From these processed per-cycle metrics, a clear trend and correlation emerged. It appeared that the diurnal profile of the room temperature was driving the cooler on and off time durations.
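The reduction from raw samples to per-cycle metrics is mechanical and can be sketched in a few lines. The actual analysis was performed in MATLAB; the Perl rendering below is for consistency with the fixture's other scripts, and the assumed log format (one timestamped sample per second with a cooler-state flag) is a placeholder, not the actual file layout.

use strict;
use warnings;

# Illustrative extraction of per-cycle on/off durations from a raw log.
# Assumed input on STDIN, one line per second: "<epoch_seconds>,<cooler 0|1>"
my (@onDur, @offDur);
my ($prevState, $lastEdge);

while (my $line = <STDIN>) {
    chomp $line;
    my ($t, $state) = split /,/, $line;
    if (defined $prevState && $state != $prevState) {
        if (defined $lastEdge) {
            # A falling edge closes an "on" interval; a rising edge an "off" one
            push @{ $prevState ? \@onDur : \@offDur }, $t - $lastEdge;
        }
        $lastEdge = $t;
    }
    $prevState = $state;
}

my $n = @onDur < @offDur ? @onDur : @offDur;
printf "cycle %d: on %d s, off %d s\n", $_ + 1, $onDur[$_], $offDur[$_]
    for 0 .. $n - 1;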
Though the minimum and maximum room temperatures throughout the day were separated by only approximately 1 °C, the rising temperature had a dramatic impact on cooler performance. As the on time durations rose, the off time durations fell. In fact, the total on/off cycle stayed nearly exactly the same: for every second of change in on time duration per cycle, a nearly equal and opposite change was noted in the off time duration.

To determine whether this trend would continue, five more 24-hour characterizations were performed with the same test parameters. The results are displayed in Figures 4.32 through 4.47.

Figure 4.32: Second Diurnal Characterization, Raw data
Figure 4.33: Second Diurnal Characterization (Zoomed), Raw data
Figure 4.34: Second Diurnal Characterization, Per-cycle metrics
Figure 4.35: Third Diurnal Characterization, Raw data
Figure 4.36: Third Diurnal Characterization (Zoomed), Raw data
Figure 4.37: Third Diurnal Characterization, Per-cycle metrics
Figure 4.38: Fourth Diurnal Characterization, Raw data
Figure 4.39: Fourth Diurnal Characterization (Zoomed), Raw data
Figure 4.40: Fourth Diurnal Characterization, Per-cycle metrics
Figure 4.41: Fifth Diurnal Characterization, Raw data
Figure 4.42: Fifth Diurnal Characterization (Zoomed), Raw data
Figure 4.43: Fifth Diurnal Characterization, Per-cycle metrics
Figure 4.44: Sixth Diurnal Characterization, Raw data
Figure 4.45: Sixth Diurnal Characterization (Zoomed), Raw data
Figure 4.46: Sixth Diurnal Characterization (Zoomed), Raw data
Figure 4.47: Sixth Diurnal Characterization, Per-cycle metrics

These six identical tests revealed two findings. First, it was clear that the relationship and correlation between cooler on/off time durations and room temperature was very repeatable. Second, the final test confirmed that burn-in testing was critically important. As the data shows, the cooler suffered a catastrophic failure in the last few hours of the final test. Though power was continuously supplied, the cooler failed to reduce the local temperature, remaining ineffectively in the on state until the stop time was reached.

The baseline process was instrumental in identifying necessary modifications to the test fixture design, early failures, and even correlations and trends in the data. After performing test fixture repairs (cooler replacement), this data was used to begin the formulation of a model-based data transmission technique.

Chapter 5: Data & Findings

5.1: Analysis Metrics & Definitions

Having discussed the overall goal of this experimental research, the test fixture build process, the initial phases of testing, and the identified data correlations, attention now turns to the first implementations of the model-based data transmission technique. As a reminder, the overall goal of this model-based approach is to reduce communication requirements while still providing adequate data reproduction, so that the system can be controlled well by the transmitted data.

In order to properly analyze and compare the collected data, appropriate metrics must be determined to assess the success with which the data is replicated and the amount of bandwidth that is saved. Qualitatively, one might look at the graphs (figures) presented for the two methods of data transmission and make an assessment based on how similar the two are graphically. However, a more quantitative process is required if meaningful results are to be drawn from the data being presented.
As such, a Figure of Merit (FOM) parameter was devised that allows quantitative comparison of the traditional and model-based data transmission techniques. The FOM for the traditional data transmission technique is defined as 1, providing a benchmark for the proposed technique being evaluated. This is shown in Equation 5.1.

$FOM_{traditional} = 1$ (5.1)

The FOM for model-based test runs is a multifaceted metric that evaluates three key qualities of the cooler's temperature data as compared to the same values for the traditional data set run in similar conditions. The general relationship is shown in Equation 5.2.

$FOM_{model\text{-}based} = 1 - A - B - C$ (5.2)

The first variable term, A, represents the amount of time spent outside of a defined tolerance window (as a delta to the traditional case). Out-of-tolerance limits were defined identically for compared cases of traditional and model-based data sets; the limits are typically ±1 °C from the set point for bang-bang control and ±0.2 °C from the set point for PID control. As not all test runs were identical in length (to the second), each term is expressed as a ratio of the out-of-tolerance time to the total test time, as shown in Equation 5.3.

$A = \left(\frac{\text{out-of-tolerance time}}{\text{total test time}}\right)_{model\text{-}based} - \left(\frac{\text{out-of-tolerance time}}{\text{total test time}}\right)_{traditional}$ (5.3)

The second variable term, B, represents the relative size of the largest excursion (high or low) from the desired set point. Like the previous term, this item is a delta of the model-based ratio and the traditional ratio, as shown in Equation 5.4.

$B = \frac{\text{largest excursion}_{model\text{-}based} - \text{largest excursion}_{traditional}}{\text{set point}}$ (5.4)

In every case the traditional test shared the same set point value with the model-based run; as such, the denominator is shared. The final variable term, C, represents the relative average of the data sets. Again, this item is a delta of the model-based ratio and the traditional ratio, as shown in Equation 5.5.

$C = \frac{\text{average}_{model\text{-}based} - \text{average}_{traditional}}{\text{set point}}$ (5.5)

The combination of these three terms, and their subtraction from the ideal benchmark of $FOM_{traditional}$, provides a number near 1 that allows for thorough quantitative comparison. With the FOM metric, we can capture the long-term qualities of the data (average), individual abnormalities in the data such as spikes (excursions), and even overall trends and differences that might be small and very difficult to see by visual inspection (the out-of-tolerance comparison).

FOM is an indicator of the ability of the new data transmission technique to correctly and adequately transmit and update the data set. An additional metric is necessary to quantify the benefit in bandwidth savings (transmission efficiency). In general, a simple ratio can be used to calculate the data transmission savings, as shown in Equation 5.6.

$\text{Data Transmission Improvement} = \frac{\text{Total bytes transmitted}_{traditional}}{\text{Total bytes transmitted}_{model\text{-}based}}$ (5.6)

Expanding the numerator and denominator requires more thoughtful analysis. For this metric, a straightforward accounting of the data bytes coming and going is required. Each transmission is counted as 1 byte, whether it is a temperature sensor reading or a relay state command. Each of these transmissions is classified as either "up" or "down." Consider a satellite and a ground terminal: "up" refers to data transmission from the ground terminal to the satellite, and "down" is the reverse.
Applying this convention to the traditional data transmission approach with bang-bang control data, one temperature sensor reading is transmitted from the satellite to the ground terminal each second. Once processed, a relay command is sent to the satellite (only if deemed necessary) to change the configuration of the relay. One full cycle is defined as two toggles of the relay state. With the duration of the test known, Equation 5.7 can then be used to determine the total bytes transmitted.

$\text{Total bytes transmitted}_{traditional,\ bang\text{-}bang} = \left(1\ \tfrac{\text{byte}}{\text{s}}\right) t + \left(2\ \tfrac{\text{bytes}}{\text{cycle}}\right) N_{cycles}$ (5.7)

Similar consideration is given when determining the total data transmission for the traditional approach when PID control data is being transmitted. In this situation, the information transmitted down is one temperature sensor reading each second. Once processed, a relay command is sent to the satellite (only if deemed necessary) to change the configuration of the relay. The increased complexity of the PID controller drives up the number of relay cycles required in the control data set. Though not expressly evident in the second term of Equation 5.8, this reality is reflected in the data presented in Section 5.3.2 (PID Control Data Transfer).

$\text{Total bytes transmitted}_{traditional,\ PID} = \left(1\ \tfrac{\text{byte}}{\text{s}}\right) t + \left(2\ \tfrac{\text{bytes}}{\text{cycle}}\right) N_{cycles}$ (5.8)

In contrast to these traditional data transmission calculations are the model-based data transmission calculations. The appropriate way to account for the data transmitted here is to divide all transmissions into two groups: one-time and recurring. One-time transmissions take place at the start of model-based data transmission, are relatively large in nature, and are completed only once. Only one type of transmission falls in this category: the model itself. For bang-bang control data, the model was compressed into 4 bytes/cycle, or ~150 bytes for a two-hour test (the precise value depends on the actual model length). Recurring transmissions take place as often as deemed algorithmically necessary. Model updates are defined as recurring transmissions and have both up (one byte) and down (two bytes) components, for a total of 3 bytes per update. This is represented by Equation 5.9.

$\text{Total bytes transmitted}_{model\text{-}based,\ bang\text{-}bang} \approx 150\ \text{bytes} + \left(3\ \tfrac{\text{bytes}}{\text{update}}\right) N_{updates}$ (5.9)

Likewise, the model-based data transmission calculation when PID control data is being transferred is shown in Equation 5.10.

$\text{Total bytes transmitted}_{model\text{-}based,\ PID} \approx 3000\ \text{bytes} + \left(2\ \tfrac{\text{bytes}}{\text{update}}\right) N_{updates}$ (5.10)

Of particular note is the dramatic increase in the necessary length of the model due to the increased complexity of the controller data being transmitted. Also worth noting is the reduction in the amount of data required to perform a model update compared to Equation 5.9 (two bytes versus three). This savings comes from the fact that in the PID update algorithm, only one byte of data is needed from the satellite per update, versus the two required for the bang-bang controller.

In the coming sections, the FOM and Data Transmission Improvement metrics are noted on the figures displaying test data, as demonstrated in Figure 5.1.

Figure 5.1: Sample of Analysis Metrics Presentation

The Data Transmission Improvement (DTI) is presented in the orange circle on the left, while the FOM is presented in the green circle on the right. For traditional data transmission cases, DTI does not apply and no metric (orange circle) is presented.
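As an illustration of how these metrics combine, the fragment below evaluates the FOM terms of Equations 5.2 through 5.5 and the DTI of Equation 5.6 from pre-computed summary statistics. The variable names, and the idea of passing in ready-made statistics, are assumptions for illustration; this is not an excerpt from the analysis code used in this research.

use strict;
use warnings;

# Illustrative evaluation of FOM (Eqs. 5.2-5.5) and DTI (Eq. 5.6).
sub fom_model_based {
    my (%s) = @_;    # summary statistics for one traditional/model-based pair
    my $A = $s{oot_time_mb}   / $s{total_time_mb}
          - $s{oot_time_trad} / $s{total_time_trad};                 # Eq. 5.3
    my $B = ($s{excursion_mb} - $s{excursion_trad}) / $s{set_point}; # Eq. 5.4
    my $C = ($s{avg_mb} - $s{avg_trad}) / $s{set_point};             # Eq. 5.5
    return 1 - $A - $B - $C;                                         # Eq. 5.2
}

sub dti {
    my ($bytes_trad, $bytes_mb) = @_;
    return $bytes_trad / $bytes_mb;                                  # Eq. 5.6
}

# Example with made-up numbers:
my $fom = fom_model_based(
    oot_time_mb   => 120, total_time_mb   => 7200,
    oot_time_trad => 90,  total_time_trad => 7200,
    excursion_mb  => 0.6, excursion_trad  => 0.5,
    avg_mb        => 29.1, avg_trad       => 29.0, set_point => 29,
);
printf "FOM = %.2f, DTI = %.1fx\n", $fom, dti(14_400, 350);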
5.2: The Model-based Technique (MBT)

Before a model can be devised, there are many infrastructure-related decisions that must be made. First, the factors influencing and contributing to the behavior of the system in need of control must be determined. Some of the dominant factors for the system of interest were identified, the most prominent of which was the environment, specifically the room temperature. Second, the manner in which the model is stored, transmitted and utilized must be determined. Finally, the accuracy and longevity of the model must be assessed to achieve a better understanding of when and if model updates are required [49].

To break down this large design problem into manageable segments, a preliminary phase of model-based control was performed. This phase did not incorporate model updates, but instead addressed the first and second points mentioned above: 1) factor influence determination and characterization, and 2) model infrastructure design, representation and software logic. This portion of concept verification was implemented with bang-bang control data only.

This phase of testing was approached in the following manner. First, the test fixture was run using a traditional data transmission approach with bang-bang control data for approximately two hours. During this time, data was collected and stored. Upon completion, this data was processed and distilled into a manageable and usable model. The model was then used to implement a model-based data transmission technique across the data link for a subsequent test of similar duration to the collection phase. During this test, the transmitted model data was used to control the test fixture.

As was noted in the diurnal characterization, the room temperature was clearly a dominant factor in cooler performance. In fact, the room temperature was so influential that the HVAC had to be turned off during testing. The first step in this process was to perform a normal data collection period. This was accomplished using BB.pl, in an approach identical to that used in the diurnal characterizations. This data was considered the benchmark in terms of performance and was given a FOM of 1. The collection was successfully performed, and the data is presented in Figures 5.2 and 5.3.

Figure 5.2: Model Data Collection (Raw Data)

Figure 5.3: Model Data Collection (Per-cycle metrics)

The next phase was to distill this data into a usable model. Developing the appropriate software infrastructure to handle the model data was critical. The data structure chosen for this task was a Perl hash. Storing the cooler and test fixture information in this data structure streamlined the process of parsing and writing the data to output files for later use.

MATLAB was an integral part of the processing for this research; the test fixture data was already being processed by code written in MATLAB. Once this code was executed, the variable workspace was populated with distilled data from the files [50]. Specifically, the cooler on and off time durations for each cycle were determined, and the associated start time for each cycle was found. Slight cycle-to-cycle variation caused a scattering of on and off time durations. As such, the MATLAB analysis script also generated a regression line to best fit this data. An additional script, modelGenerator.m, was used to write this information to a file, which would be used by mbtBB.pl.
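The best-fit step amounts to an ordinary least-squares line through the per-cycle durations. The actual fit was produced by the MATLAB analysis script; the sketch below renders the same computation in Perl for consistency with the other examples, and the sample data is made up.

use strict;
use warnings;

# Illustrative least-squares fit (y = a*x + b) over per-cycle durations,
# with the cycle index as the independent variable.
sub linfit {
    my @y = @_;
    my $n = @y;
    my ($sx, $sy, $sxx, $sxy) = (0, 0, 0, 0);
    for my $i (0 .. $n - 1) {
        $sx  += $i;        $sy  += $y[$i];
        $sxx += $i * $i;   $sxy += $i * $y[$i];
    }
    my $a = ($n * $sxy - $sx * $sy) / ($n * $sxx - $sx * $sx);   # slope
    my $b = ($sy - $a * $sx) / $n;                               # intercept
    return ($a, $b);
}

my @onDur = (55, 56, 58, 57, 59, 61);    # made-up per-cycle on times [s]
my ($slope, $icept) = linfit(@onDur);
# The smoothed model entry for cycle i would be $icept + $slope * $i, rounded.
printf "on-time fit: %.2f s/cycle slope, %.1f s intercept\n", $slope, $icept;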
Recall that mbtBB.pl was a script written and resident on the BeagleBones associated with each cooler unit. This script was responsible for ingesting the model (a text file produced by modelGenerator.m) and using this model (transmitted over the data link) to control the cooler units.

Beyond organizing the data and creating the model file, modelGenerator.m was also responsible for another critical task. Since the epoch time reference and overall time structure differ between MATLAB and Perl (and Unix), modelGenerator.m translated the MATLAB epoch times to Unix epoch times before creating the model output file. Without this critical step, the model output files generated by modelGenerator.m would have been unusable by the BeagleBones. A sample excerpt of the model file can be seen in Figure 5.4, while modelGenerator.m can be seen in its entirety in Appendix Q.

Figure 5.4: Model Excerpt (Created by modelGenerator.m)

The columns from left to right are the cycle number, the epoch (Unix format) cycle start time, the cycle on time duration (in seconds), and the cycle off time duration (in seconds). Notice that the sum of the cooler on time and off time durations for any given cycle is also the difference between one cycle's start time and the next cycle's start time; a good reminder that the epoch time is the number of seconds elapsed from a unique seed time. Also notice that the on and off times listed in this file are taken from the best-fit (regression) line of the original on and off time data [9].

The model does not contain every cycle recorded during the data collection. Specifically, the first five cycles are removed, as they represent controller and temperature settling time and are not representative of the steady-state performance of the model. As such, n preliminary cycles (normally three) were accomplished via traditional data transmission, and when they were complete, model-based data transmission was initiated.

The model output file was renamed to an appropriately defined title, such as modelLeft.txt or modelRight.txt, in the script code. These files were then transferred to the BeagleBone using SFTP for subsequent use by mbtBB.pl.

After implementing this model-based data transmission technique (no updates), moderate success was observed. Recall that the mbtBB.pl script began with n cycles of traditional data transmission before transitioning to a pure model-based approach. This transition is evident in the data shown in Figures 5.5 and 5.6.

Figure 5.5: MBT, Bang-bang (Raw Data)

Figure 5.6: MBT, Bang-bang (Per-cycle metrics)

When reviewing this data, a clear transition was noted between the use of the traditional data transmission method (first three cycles) and the new model-based approach. When model-based data transmission was initiated, the effective control limits shifted lower; note that this was not by design. Apart from these deviations, control of the system while data was transmitted using the model-based technique was marginally successful at maintaining the local temperature. This was quantified by a FOM of 0.64 (compared to 1 for the traditional case). This result was expected, as the data showed a large downward bias in control limits that went uncorrected due to the lack of model updates. The DTI metric boasted a 54.4x savings in data transmission for this test execution.
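A minimal sketch of the ingestion step follows. It parses the four whitespace-separated columns of Figure 5.4 into a Perl hash keyed by cycle number; the hash layout, file name, and scheduling loop are illustrative assumptions, not an excerpt from mbtBB.pl.

use strict;
use warnings;

# Illustrative ingestion of a model file in the Figure 5.4 format:
#   <cycle> <unix_epoch_start> <on_duration_s> <off_duration_s>
my %model;
open(my $fh, '<', 'modelLeft.txt') or die "Cannot open model: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($cycle, $start, $on, $off) = split ' ', $line;
    $model{$cycle} = { start => $start, on => $on, off => $off };
}
close($fh);

# Example use: walk the model and act on each cycle at its epoch start time.
for my $cycle (sort { $a <=> $b } keys %model) {
    my $m = $model{$cycle};
    sleep(1) until time() >= $m->{start};
    # ...turn the cooler on for $m->{on} seconds, then off for $m->{off}...
    printf "cycle %d: on %d s, off %d s\n", $cycle, $m->{on}, $m->{off};
}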
5.3: The Algorithmically Corrected Model-based Technique (ACMBT)

The process outlined in Section 5.2 was utilized in this section, beginning with a model data collection phase, followed by model generation, and finally a model-based data transmission run. Different from the process outlined in Section 5.2, however, was the introduction of ACMBT (model updates) in the final phase rather than MBT (no updates). Successful execution of the model-based technique proved the model format, infrastructure and approach to be viable building blocks for further development of the larger goal, ACMBT. The model-based technique results showed that significant bandwidth savings were achievable, but these savings came at the cost of system control performance. This was anticipated, as the execution of model updates, a major piece of the conceptual puzzle, was missing from the model-based technique.

In this section, data is presented that shows the effect of adding model updates. These model updates are performed using a variety of algorithms (hence the term "Algorithmically Corrected"), which are unique to the type of data transmitted, as will be discussed in detail.

Two distinct types of control data were transmitted across the data link in independent case studies for this research. They are discussed thoroughly and independently as the bang-bang and PID control data sets. The algorithms required to execute model updates for one control data set were not appropriate for the other. This can likely be generalized to all data sets in need of update: algorithms that leverage unique qualities of the data set (and are therefore specific to that set) will be the most efficient at succinctly updating the larger set.

Finally, as this data transmission concept is applied to the control theory arena, tests were executed in a variety of disturbance conditions for each control data type. In general terms, these disturbances can be broken down into two main groups: natural and induced. A natural disturbance is defined as a change in the ambient room temperature (previously discussed as the largest disturbance mechanism to the test fixture). Of this variety, some tests were subjected to a small natural disturbance (a nearly quiescent room temperature for the duration of the test) and others to a large natural disturbance (~0.5 °C change in room temperature over the duration of the test). An induced disturbance is defined as a change to the test fixture's plate temperature. Nominally, the plate temperature is held at 36 °C. In cases where an induced disturbance was used, an increase of the plate temperature from 36 °C to 38 °C was commanded. This was accomplished using the bottom- and top-mounted plate heaters in tandem, to ensure that the maximum wattage possible would be delivered to the fixture per unit time. Note that in every case in which an induced disturbance was used, a large (~0.5 °C) natural disturbance was also observed simultaneously. This is discussed at greater length in each subsection as the data is presented. All disturbance shifts are displayed clearly on the plots as "Room Temp" or "Plate Temp" and are discussed in the text surrounding each section.

5.3.1: Bang-bang Control Data Transfer

5.3.1.1: Model Update Algorithms

Before discussing the data, it is important to understand the algorithms behind the model updates.
There are either one or two algorithms at work in any given case involving bang-bang control data transfer and ACMBT. One is a real-time algorithm, best described as a phase shift algorithm, while the other is non-real-time and uses amplitude scaling to achieve a one-time model update.

5.3.1.1.1: Phase Shift Algorithm, Φ (Real-Time)

The model update approach used in every case where bang-bang control data was transmitted was the Phase Shift Algorithm. As demonstrated in Section 5.2 and shown in Figure 5.4, the model is a time-sensitive array of cooler (relay) on times, followed by on and off durations in seconds. Early in the characterization portion of this research, a series of diurnal tests was performed in which the relationship between external factors (disturbances) and overall performance was trended over a variety of conditions and times. As is the case with many space-based systems, this test fixture proved to have certain tendencies and behaviors that were linked strongly and reliably to disturbance mechanisms. The overwhelming question was how to quantify and measure (autonomously) the amount by which to apply a model update based on these relationships.

Four possible scenarios were identified in which cooler temperature data would be presented to the onboard algorithm for processing, to be either flagged for model update or passed as good (no update required). These scenarios are shown in Figure 5.7.

Figure 5.7: Cooler Temperature Data Scenarios Pre-Model-Update

Due to the control type employed (bang-bang) and the overshoot expected as a result of that choice, the cause of the performance shown in Figure 5.7(a) was the control method itself. As such, no algorithm was expressly designed or implemented in real time to account for these excursions. Figure 5.7(b) shows a condition in which no update was needed, as all data was within the defined limits. The remaining cases were very similar in nature. The demonstrated cause of the performance shown in Figure 5.7(c) and (d) was the slowly changing disturbance profile over the course of the test. If left uncorrected, this would eventually result in performance similar to that observed with the Model-based Technique (Section 5.2).

The Phase Shift Algorithm corrected the two conditions shown in Figure 5.7(c) and (d) by shifting execution to a more appropriate position in the model for the disturbance conditions being experienced by the system. Once the need for a model update was determined, the onboard system flagged the remote terminal by transmitting the necessary information. In this case, the necessary pieces of information were two stored temperature sensor readings: the readings from the last time the cooler was turned off and the last time the cooler was turned on. In other words, these were the temperature readings at the endpoints of the last cooler cycle. In a perfect world these values would be identical to the lower (cooler off) and upper (cooler on) threshold limits, 28 °C and 30 °C, respectively. Of course, since cooler relay operation was driven by a model stored onboard, there was at times some delta. In fact, taking the average of the deviation from the upper limit and the deviation from the lower limit was instrumental in determining how to shift the model. This is referred to as the out-of-tolerance average (OOT average) and is shown in Equation 5.11.

$OOT_{average} = \frac{(T_{on} - T_{upper}) + (T_{off} - T_{lower})}{2}$ (5.11)
The next step was to perform a real-time computation to determine the near-real-time efficiency of the cooler. Since the temperature readings at the turn-on and turn-off points of the cooler were known, only the times at which these commands were executed were needed to determine the cooler's efficiency (performance over time). Fortunately, this information is contained in the model, which resides with the remote (ground) terminal; no additional bandwidth was required to obtain it. Using Equation 5.12, the ΔC of the cooler was extracted, which represents the change in local temperature (measured in °C) per second.

$\Delta C = \frac{T_{on} - T_{off}}{t_{off} - t_{on}}$ (5.12)

In Equation 5.12, upper-case "T" refers to the transmitted temperature sensor readings from the onboard system, while lower-case "t" refers to the corresponding times at which the relays were commanded (and the sensors were read).

The final step was to determine the amount by which to shift the model. This was found by simply dividing the average OOT value by ΔC, arriving at a predicted amount of time by which to slow down or speed up the cycle period to bring the model performance back on track. This value is represented by Φ and is shown in Equation 5.13.

$\Phi = \frac{OOT_{average}\ [°C]}{\Delta C\ [°C/\text{second}]}$ (5.13)

Note the dimensional analysis in Equation 5.13 showing that the resulting value, Φ, is indeed in units of seconds and may be forwarded directly to the onboard system to be applied as a model update. Finally, it is worth noting that the conditions shown in Figure 5.7(c) and (d) produce opposite polarities in each of Equations 5.11 through 5.13. This is by design. The resultant value of Φ is either positive or negative depending on the scenario encountered and the necessary direction of shift. The polarity of Φ ensures a proper model update is transmitted.
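Taken together, Equations 5.11 through 5.13 reduce to a few lines of ground-side code. In the sketch below, the two transmitted endpoint readings and the model-derived command times are assumed to be in hand already; the subroutine and variable names are placeholders, not taken from the actual ground software.

use strict;
use warnings;

# Illustrative ground-side phase shift computation (Eqs. 5.11-5.13).
sub phase_shift {
    my ($T_on, $T_off, $t_on, $t_off, $T_upper, $T_lower) = @_;
    my $oot_avg = (($T_on - $T_upper) + ($T_off - $T_lower)) / 2;  # Eq. 5.11 [deg C]
    my $dC      = ($T_on - $T_off) / ($t_off - $t_on);             # Eq. 5.12 [deg C/s]
    return $oot_avg / $dC;                                         # Eq. 5.13, Phi [s]
}

# Cooler running hot: both endpoint readings 0.4 deg C above their thresholds,
# with a 50 s cooling leg, gives Phi = 0.4 / 0.04 = +10 s of model shift.
my $phi = phase_shift(30.4, 28.4, 1000, 1050, 30.0, 28.0);
printf "Phi = %+.1f s\n", $phi;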
5.3.1.1.2: Amplitude Scaling Algorithm, α (Non-Real-Time)

An additional algorithm was added to improve performance and reduce the number of model updates needed in real time. This algorithm uses an amplitude scaling approach and runs once, conditioning the model before it is transmitted across the data link for the first time. As such, it imposes no adverse bandwidth overhead, and it has a demonstrated benefit to the FOM, as will be shown.

Due to the length of the data collection phase and the subsequent ACMBT test run, the most significant natural disturbance, the room temperature, varied substantially. Even with insulation protection from the wind and similar effects, the natural diurnal changes outside, played out over a four-hour window (a two-hour data collection phase plus a two-hour ACMBT run), became a factor. However, as was shown with the Phase Shift Algorithm, external factors can be subdued with the correct algorithm, as this chapter demonstrates.

Since cooler performance is related very closely and repeatably to the room temperature, this relationship was exploited and extrapolated at the start of the ACMBT run to scale the model to the current room temperature. The assumption was made that the room temperature would continue to rise or fall at the same pace noted during the model collection phase. The confidence to make this assumption, again, came from the thorough diurnal characterization performed early in the test fixture build process.

Using the cooler on time and off time durations from the model collection data, two values were determined using an identical method for each. The goal was to find how much these durations changed (as a percentage) over the course of the data collection period. This was accomplished using the approach shown in Equation 5.14.

$\alpha = 1 + \dfrac{\frac{s_{n-2} + s_{n-1} + s_n}{3} - \frac{s_1 + s_2 + s_3}{3}}{\frac{s_1 + s_2 + s_3}{3}}$ (5.14)

In Equation 5.14, s represents the data set, either the array of cooler on time durations or the array of off time durations; for this calculation they are treated identically. The last three points in the set are averaged to avoid an errant result due to an outlier. Similarly, the first three points are averaged. The difference is found as a percentage of the starting quantity, and one is added to make α a multiplier. This process was followed for both the cooler on and off data sets, and individual multipliers, α_on and α_off, were found. The multipliers were applied to each value in the appropriate column of the model, and each array value was rounded to the nearest second before initial model transmission across the data link.

In certain cases, this algorithm was crucial; however, it was not needed or used in all disturbance cases. Use of this algorithm in conjunction with the Phase Shift Algorithm is noted in the description of the ACMBT approach in the following sections.
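A compact rendering of Equation 5.14 and its application to one model column is sketched below. The three-point averaging follows the text; the array contents and rounding step are illustrative.

use strict;
use warnings;
use POSIX qw(floor);

# Illustrative amplitude scaling (Eq. 5.14) applied to one model column.
sub alpha {
    my @s = @_;    # collected on-time (or off-time) durations, in order
    die "need at least six points" if @s < 6;
    my $first = ($s[0]  + $s[1]  + $s[2])  / 3;   # average of first three points
    my $last  = ($s[-3] + $s[-2] + $s[-1]) / 3;   # average of last three points
    return 1 + ($last - $first) / $first;         # Eq. 5.14
}

my @onDur  = (50, 51, 52, 55, 57, 58);            # made-up collected on times [s]
my $a_on   = alpha(@onDur);
my @scaled = map { floor($_ * $a_on + 0.5) } @onDur;  # round to nearest second
printf "alpha_on = %.2f\n", $a_on;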
5.3.1.2: Case 1a – Minimal Natural Disturbance

The first case for ACMBT attempted to reduce the external variations experienced by the test fixture. As such, no induced disturbance was applied, and the time of day was carefully selected to minimize the natural disturbance. This can be clearly seen in the nearly flat red line ("Room Temp") in Figure 5.8, where the data collection information is presented.

Figure 5.8: Model Collection, Case 1a, Bang-bang (Raw Data)

Note that the data collection phase also served as a benchmark for performance and was assigned a FOM equal to 1, as displayed in the green circle in the upper right-hand corner of Figure 5.8. This is because the method of data transmission used during this data collection phase was the traditional method, in which each byte of sensor data was transmitted "down" and each byte of actuator data was transmitted "up," as needed (see Section 5.1). Just as before, this data was distilled into cycle-by-cycle metrics, which closely resemble the model representation. This is shown in Figure 5.9.

Figure 5.9: Model Collection, Case 1a, Bang-bang (Per-cycle metrics)

The collection phase was immediately followed by model creation and an ACMBT test run. Figures 5.10 and 5.11 display the raw data results and the cycle-by-cycle metrics.

Figure 5.10: ACMBT – 1 Algorithm, Case 1a, Bang-bang (Raw Data)

Figure 5.11: ACMBT – 1 Algorithm, Case 1a, Bang-bang (Per-cycle metrics)

A few observations were made immediately. First, it was clear from Figure 5.10 that 11 updates were required over the duration of the test, noted by the cyan colored line. Two large periods of model update inactivity were seen, one at the beginning of the test period and one closer to the end of the test duration. The one closer to the end of the test duration (~01:45 to ~02:10) was of greater significance, as it indicated that the algorithm had successfully corrected the data set deep into the test run, after the model had been perturbed by a number of updates. This was promising, as it indicated inherent stability of the model update processing and the underlying algorithm interactions.

Additional observations were made with regard to the FOM and DTI metrics. Analysis showed a FOM of 0.96, indicating nearly identical performance to the traditional data transmission method benchmarked in the data collection phase and shown in Figure 5.8. This near-identical performance came with a dramatic benefit to bandwidth utilization. The DTI metric showed a 42.5x savings in transmission across the data link when comparing the information transmitted with ACMBT to the traditional data transmission method. As an example for future cases, a detailed DTI calculation is presented in Appendix R for this test case.

5.3.1.3: Case 1b – Large Natural Disturbance

The next case for ACMBT was performed in the presence of a larger varying natural disturbance, defined here as ~0.5 °C of room temperature variation over the test duration. This change in room temperature can be clearly seen in the increasing slope of the red line ("Room Temp") in Figure 5.12, where the data collection information is presented.

Figure 5.12: Model Collection #1, Case 1b, Bang-bang (Raw Data)

Again, a FOM of 1 was assigned to the data collection performance, as this set the benchmark for the ACMBT test run. Figure 5.13 immediately follows, showing the per-cycle metrics.

Figure 5.13: Model Collection #1, Case 1b, Bang-bang (Per-cycle metrics)

Once processed and ready for use, the model was used in concert with the Phase Shift Algorithm for an ACMBT data transmission test. The results are presented in Figures 5.14 and 5.15.

Figure 5.14: ACMBT – 1 Algorithm, Case 1b, Bang-bang (Raw Data)

Figure 5.15: ACMBT – 1 Algorithm, Case 1b, Bang-bang (Per-cycle metrics)

Both visually and analytically (from the FOM metric), it was clear that this execution of ACMBT was not as successful as previous iterations. After the first three cycles, which were performed using a traditional data transmission technique, ACMBT commenced. Immediately, model updates were initiated, and they were performed consistently for the duration of the test, 29 in total. More troubling than the sheer number of updates was the fact that the updates were ineffective at replicating the traditional data transmission's ability to maintain tight system control. This was clearly identified in the FOM value of 0.44. The most significant contributing factor to this low FOM value was the consistently low bias of cooler temperature within the threshold limits (reminiscent of Figure 5.7(d)). Furthermore, the constant transmission of model updates had a clear impact on the DTI metric, which, at 23.3x, was significantly lower than the previously seen values.

Hypothesizing that the performance change was mainly due to the new disturbance conditions, an analysis of the model data proved this to be true. The model data depended largely on the disturbance profile under which it was collected. In this particular case, that disturbance profile began at a room temperature of ~26.5 °C and ended at ~27 °C.
The ACMBT run, however, operated within a completely different disturbance profile, beginning at ~27 °C and ending at ~27.5 °C. This rendered the model stale and inappropriate for use, resulting in poor effective system performance.

Though the Phase Shift Algorithm was robust at correcting relatively small external variations and inherent discrepancies in the model, it was evident that another algorithm was required. The role of this algorithm would be to evaluate the disturbance conditions at the start of the ACMBT run and update the model once, before transmission, to make the model fit the current conditions. This algorithm was created and named the Amplitude Scaling Algorithm.

Another series was run using this new two-algorithm ACMBT approach, in which the Amplitude Scaling Algorithm was used (non-real-time) and, as before, the Phase Shift Algorithm was run in real time. The data is presented in Figures 5.16 and 5.17.

Figure 5.16: Model Collection #2, Case 1b, Bang-bang (Raw Data)

Figure 5.17: Model Collection #2, Case 1b, Bang-bang (Per-cycle metrics)

Based on the information gathered here, the Amplitude Scaling Algorithm determined values of α_on = 1.16 and α_off = 0.91. This means that all the model values for cooler on time durations were scaled up (increased) by a factor of 1.16, and the off time durations were scaled down (decreased) by a factor of 0.91. This model manipulation took place before the first byte of the model was transmitted across the data link. As such, the implementation of this algorithm had only a positive impact on DTI. The results of this ACMBT run are shown in Figures 5.18 and 5.19.

Figure 5.18: ACMBT – 2 Algorithms, Case 1b, Bang-bang (Raw Data)

Figure 5.19: ACMBT – 2 Algorithms, Case 1b, Bang-bang (Per-cycle metrics)

These results demonstrated the need for the Amplitude Scaling Algorithm. Only four real-time model updates were required during the ACMBT test run. Not only did the FOM (0.96) show performance comparable to the traditional data transmission method, but the scaling algorithm also reduced the frequency of real-time model updates, a benefit reflected in the DTI metric (33.4x) and a demonstration of the efficacy and success of the algorithm.

5.3.1.4: Case 2 – Induced Disturbance

The final, and most environmentally demanding, case performed was the induced disturbance case. The model data collection was performed like all previous cases, and the information is presented in Figures 5.20 and 5.21. As discussed at the beginning of Section 5.3, the induced disturbance is a controlled increase in plate temperature from 36 °C to 38 °C, followed by a plateau at 38 °C and a subsequent return to 36 °C. The plate temperature is represented by the green line in Figure 5.20 ("Plate Temp"). Also note that in addition to the induced disturbance, there was an underlying natural disturbance affecting the test fixture, as demonstrated by the red line in Figure 5.20 ("Room Temp").

Figure 5.20: Model Collection, Case 2, Bang-bang (Raw Data)

Figure 5.21: Model Collection, Case 2, Bang-bang (Per-cycle metrics)

Due to the presence of the large natural disturbance, and the proven results of the two-algorithm approach in this environment, that method was employed here.
α values of α_on = 1.10 and α_off = 0.86 were determined and applied to the original model before initial data transmission. The ACMBT test data is presented in Figures 5.22 and 5.23.

Figure 5.22: ACMBT – 2 Algorithms, Case 2, Bang-bang (Raw Data)

Figure 5.23: ACMBT – 2 Algorithms, Case 2, Bang-bang (Per-cycle metrics)

This environmentally strenuous case demonstrated ACMBT's ability to replicate the overall system performance of traditional data transmission while saving large amounts of bandwidth. In fact, the FOM, which compares three different aspects of the system's control behavior under ACMBT to that under traditional data transmission, was actually slightly higher for the ACMBT test run (FOM = 1.04) than for the benchmark data collection run (FOM = 1). As the numbers are very close, this increase is not likely significant. However, it is fair to say that the performance is nearly identical (mathematically, based on the FOM comparison metric) to the performance noted under traditional data transmission. This is extremely noteworthy considering it was achieved while transferring 38.8x (DTI) less data across the data link.

5.3.2: PID Control Data Transfer

The data previously transmitted across the data link was bang-bang control data, collected using the bang-bang control method in conjunction with a traditional data transmission technique. The research was expanded to consider the transfer of data collected using the PID controller in conjunction with a traditional data transmission technique across the data link. In the same fashion, the process began with model data collection, continued to model generation, and culminated with ACMBT test execution. As before, the data collection phase also served as a benchmark (FOM = 1) for ACMBT comparison, to judge the success of overall data replication as evidenced by system control performance. Additionally, the DTI metric served as an indication of the bandwidth burden or savings on the data link.

5.3.2.1: Model Update Algorithm

One algorithm was devised and used for PID control data transfer. It is a real-time algorithm called the Average-based Cycle Priority Substitution Algorithm. The model used for PID control data is identical in format to the model used to transfer bang-bang control data: four bytes of information were required per cooler relay cycle, and these four bytes made up one entry in the model. The PID controller demanded many more relay toggles by design, and as a result, more entries were present in the model.

A running average of the sensed local cooler temperature is maintained onboard. Every 15 or 20 cycles (a full on/off cycle is ~7 s), this running average is checked onboard against a threshold. If the previous model performance is exceeded (the running average of the sensed temperature is high on average), the next cycle is re-prioritized and dedicated solely to cooling. For example, if the upcoming cycle were to be 2 seconds of cooler on time followed by 4 seconds of cooler off time, the post-update condition would be converted by the algorithm to 6 seconds of cooler on time. Similarly, had the running average of the sensed temperature yielded a lower value than expected (previously seen by the model), the post-update condition would have been reversed. Note that this is very much a threshold-based algorithm, not a PID-based algorithm.
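The substitution step can be sketched as follows. Only the re-prioritization logic (an out-of-family running average converts the next cycle to all-on or all-off) follows the text; the tolerance band, set point, and field names are placeholder assumptions.

use strict;
use warnings;

# Illustrative cycle priority substitution for PID control data (threshold-based).
my $runningAvg = 29.6;    # onboard running average of sensed temperature [deg C]
my $setPoint   = 29.0;    # desired set point [deg C]
my $band       = 0.2;     # assumed tolerance band on the running average

# Next model entry: 2 s on, 4 s off (the example from the text)
my %next = ( on => 2, off => 4 );

if ($runningAvg > $setPoint + $band) {
    # Running hot: dedicate the entire next cycle to cooling (6 s on, 0 s off)
    %next = ( on => $next{on} + $next{off}, off => 0 );
}
elsif ($runningAvg < $setPoint - $band) {
    # Running cold: reverse the substitution (0 s on, 6 s off)
    %next = ( on => 0, off => $next{on} + $next{off} );
}
printf "next cycle: on %d s, off %d s\n", $next{on}, $next{off};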
The model, however, is based on PID control data. The model was therefore intrinsically improved relative to the bang-bang case, though this improvement came at the cost of model data size (now much larger). The model update algorithm employed here, however, was similar in complexity (threshold-based) to the bang-bang model update algorithms.

5.3.2.2: Case 1 – Large Natural Disturbance

The first case was performed only in the presence of natural disturbances. As such, the plate temperature was held constant for the duration of the test. Unlike previously presented cases with natural disturbances, this case showed a negatively sloping room temperature rather than a positively sloping one, due mainly to the time of day (early morning). This trend, along with the model data collection information, is presented in Figure 5.24.

Figure 5.24: Model Collection, Case 1, PID (Raw Data)

Also worth noting is the far better overall system control performance observed with the PID controller. In this case, the desired set point was 29 °C, and it was very clearly maintained with ease. The tendency may be to compare this control performance to that of the bang-bang controller, but this should be avoided. This research is not a study of controller performance or of control theory optimization! It is instead an investigation into an alternate method of transmitting data. As a side note, and to be fair, the control limits for the bang-bang case could have been set much tighter to improve its performance; doing so would have come at the cost of additional relay cycling, a tradeoff that, as will now be seen, the PID controller concedes.

Figure 5.25 shows the processed per-cycle metrics. Two things are clear from this data. First, observing the left y-axis, the average cooler on/off durations were much lower than those recorded with the bang-bang control data: whereas the bang-bang cooler on/off durations were usually 50-60 s long, these values averaged less than 5-10 s.

Figure 5.25: Model Collection, Case 1, PID (Per-cycle metrics)

Second, as a consequence, the model requires more data for the same test duration. A constant four bytes are required per cycle, and the decrease in average cycle time directly results in more cycles over the test duration. This translates to more model entries (one entry per cycle) and hence a more data-intensive model. This is evident when comparing Equations 5.9 and 5.10. With this said, the comparison of interest is not between bang-bang and PID, but rather between the model data collection run and the ACMBT run. This data is shown in Figures 5.26 and 5.27.

Figure 5.26: ACMBT – 1 Algorithm, Case 1, PID (Raw Data)

Figure 5.27: ACMBT – 1 Algorithm, Case 1, PID (Per-cycle metrics)

Comparing Figure 5.24 to Figure 5.26 shows the traditional data transmission method to have the clear advantage when it comes to overall system performance. This was analytically confirmed by the FOM calculation of 0.81. However, ACMBT performance was still acceptable considering the DTI of 2.2x. A total of 20 model updates were required. An interesting dynamic is noted at the end of this data set: a long period of time elapsed with no model updates (~06:37 until the end of the test). This period coincided with a distinctive disturbance profile rise from ~25.5 °C to ~26.6 °C.
This rise actually returned the environment to conditions that more closely resembled (matched) the range of conditions seen at the end of the model collection period. This further indicates that predictive disturbance modeling was vital in reducing the number of model updates, and hence the data transmission requirements.

5.3.2.3: Case 2 – Induced Disturbance

The final disturbance case performed with PID control data transfer was the induced disturbance case, the most environmentally demanding case performed for PID control data transfer. The model data collection was performed like all previous cases, and the information is presented in Figures 5.28 and 5.29.

Figure 5.28: Model Collection, Case 2, PID (Raw Data)

Figure 5.29: Model Collection, Case 2, PID (Per-cycle metrics)

The plate temperature is represented by the green line in Figure 5.28 ("Plate Temp"). Also note that in addition to the induced disturbance, there was an underlying natural disturbance affecting the test fixture, as demonstrated by the red line in Figure 5.28 ("Room Temp"). ACMBT performance is presented in Figures 5.30 and 5.31.

The induced disturbance definitely provided challenges to ACMBT. This was evident not only in the model update activity, but also in the resultant control performance; the FOM of 0.78 reflects this observation. However, the system remained under control, and the model was updated appropriately and successfully, as desired. This was accomplished while achieving a DTI of 2.1x over the traditional data transmission technique.

Figure 5.30: ACMBT – 1 Algorithm, Case 2, PID (Raw Data)

Figure 5.31: ACMBT – 1 Algorithm, Case 2, PID (Per-cycle metrics)

5.4: Findings Summary

The findings presented previously are summarized in this section for ease of comparison. The tables present all pertinent information for each case, including the disturbance scenario, the type of data transmission algorithm employed, the number and type of algorithms used (ACMBT only), and the FOM and DTI results obtained.

As a general rule, comparisons should not be made from one table to another. This restates the earlier point that no comparisons should be made between the bang-bang controller and the PID controller. Additionally, comparisons should be avoided between dissimilar disturbance profiles (case numbers) within the same table unless special attention is given to the different algorithms employed (as were necessary due to the environmental differences).

5.4.1: Bang-bang Control Data Transfer

The MBT demonstrated that considerable DTI could be achieved at the cost of overall system control performance, as no model updates were employed. This was a two-edged sword: no bandwidth was required to transmit model updates, but at the same time the potential benefit of the updates was not realized, resulting in a low FOM (0.64). The introduction of model updates (ACMBT) with only one algorithm (Phase Shift) improved the overall performance dramatically. FOM values rose to 0.96, using either one or two algorithms depending on the disturbance profile, while sustaining DTI values of 42.5x and 33.4x, respectively.
Finally, the most demanding disturbance profile (Case 2) proved that the two-algorithm approach was robust, delivering a FOM that slightly exceeded that of the traditional data transmission method, along with a DTI of 38.8x. These results are summarized in Table 5.1.

Table 5.1: Summary of Data (Bang-bang Control Data Transfer)

Case | Disturbance Condition (Method) | Traditional FOM | MBT, No Corrections (FOM / DTI) | ACMBT, Phase Shift Only (FOM / DTI) | ACMBT, Phase & Amplitude (FOM / DTI)
1a   | Minimal (Natural)              | 1               | - / -                           | 0.96 / 42.5x                        | - / -
1b   | Gradual (Natural)              | 1               | 0.64 / 54.4x                    | 0.44 / 23.3x                        | 0.96 / 33.4x
2    | Large (Induced)                | 1               | - / -                           | - / -                               | 1.04 / 38.8x

5.4.2: PID Control Data Transfer

A summary of the data resulting from the application of ACMBT to the transfer of PID control data is shown in Table 5.2. Test cases were performed in two disturbance scenarios: the first a gradual, natural disturbance; the second a large induced disturbance (combined with an underlying natural disturbance). Both cases resulted in lower overall system control performance (FOM ~0.8) than the benchmark of the traditional data transmission case (FOM = 1). Despite this degraded performance, both cases demonstrated greater than 2x DTI. Careful examination of the control behavior of these cases revealed that the system was still under control. As is so often the case, a trade exists as to whether the DTI is of greater importance to the designer than the additional control fidelity offered by traditional data transmission.

Table 5.2: Summary of Data (PID Control Data Transfer)

Case | Disturbance Condition (Method) | Traditional FOM | ACMBT, Cycle Priority Substitution (FOM / DTI)
1    | Gradual (Natural)              | 1               | 0.81 / 2.2x
2    | Large (Induced)                | 1               | 0.78 / 2.1x

Chapter 6: Concluding Remarks

6.1: Summary

The growing demands on the communications link have led to interest in new and innovative techniques for data transmission. The purpose of this work was to propose, and thoroughly research through experimentation, a potential solution: the use of model-based data transmission with real-time updates, or ACMBT. The hypothesis was that this approach would provide an alternative data transmission technique offering significant savings in bandwidth utilization.

Aided by a specifically designed test fixture and custom-written software, ACMBT was demonstrated. A variety of data environments were used, and two different data sets were implemented as test cases. In both cases, data transmission was successfully demonstrated, and model-based updates were shown to be viable and essential in maintaining robust system performance. Additionally, bandwidth efficiencies were gained in all cases, without exception.

Quantitative comparisons were also made. Metrics and systems were devised to allow for accurate comparison of test cases, and these were presented. The use of models for real-time data transmission has been quantified and demonstrated in this research. Whether ACMBT will be useful in a particular application will depend on the particular system design requirements, and specifically on whether the system can
Specific to the scope of this work, it is clear that algorithms are critical to the success of the overall technique. Furthermore, as demonstrated in this research, the algorithms are unique to the data being transmitted. Perhaps certain algorithms exist, and may be found one day, that are ubiquitous to all data sets, but for now it appears that in order to achieve the primary goal of bandwidth reduction, the algorithm must be unique to the data set. It is recommended that research be directed toward the aim of algorithm investigation, both in the general sense and for specific implementations. This is also true for the model format selection. The model format selected here was most appropriate for bang-bang control data transmission. Furthermore, it was clear that transmission of PID control data using a model intended for bang-bang control data transmission resulted in a dramatic reduction in DTI. It is recommended that research be performed to investigate additional model formats or possibly, if a ubiquitous model form is viable for all data sets, without compromising the DTI. 164 Finally, the specific application of ACMBT selected for the context of this research was the area of space and real-time satellite thermal control data transmission. However, the opening pages of this work identified this general concept as a new data transmission technique that could be applied to any other area. By virtue of selecting a test platform for demonstration, the scope is immediately reduced. In light of this reality, it is recommended that future efforts be directed towards researching the ways that ACMBT might be used in other venues. This may be in space, perhaps for reduction of intra- constellation communication or even outside of the space arena altogether. For example, submarines are often bandwidth constrained due to the difficulties of transmission through salt water. ACMBT might provide a good alternative solution in this application. 165 References [1] G. Jean, “Remotely Piloted Aircraft Fuel Demand for Satellite Bandwidth,” National Defense Magazine, Jul-2011. [2] M. Weik, Communications Standard Dictionary. Springer, 1995. [3] D. G. Gilmore, Satellite Thermal Control Handbook. Aerospace Corporation Press, 1994. [4] R. C. Dorf and R. H. Bishop, Modern Control Systems, 9th ed. Prentice Hall, 2000. [5] W. J. Larson and J. R. Wertz, Eds., Space Mission Analysis and Design, 3rd edition, 3rd ed. Microcosm, 1999. [6] “Telemetry Tutorial.” L-3 Communications Telemetry West. [7] “Report Concerning Space Data System Standards Telemetry, Summary of Concept and Rationale,” CCSDS 100.0-G-1, Dec. 1987. [8] D. Hastings and H. Garrett, Spacecraft-Environment Interactions. Cambridge University Press, 2004. [9] J. H. Mathews and K. K. Fink, Numerical Methods Using Matlab, 4th ed. Pearson, 2004. [10] J. Straub, “Model Based Data Transmission: Analysis of Link Budget Requirement Reduction,” Communications and Network, vol. 04, no. 04, pp. 278–287, 2012. [11] J. Reedy and S. Lunzman, “Model Based Design Accelerates the Development of Mechanical Locomotive Controls,” SAE International, Warrendale, PA, 2010-01- 1999, Oct. 2010. [12] M. Ahmadian, “Model based design and SDR,” 2005, vol. 2005, pp. 19–19. [13] G. Simon, G. Karsai, G. Biswas, S. Abdelwahed, N. Mahadevan, T. Szemethy, G. Peceli, and T. Kovacshazy, “Model-based fault-adaptive control of complex dynamic systems,” in Proceedings of the 20th IEEE Instrumentation and Measurement Technology Conference, 2003. IMTC ’03, 2003, vol. 1, pp. 176 – 181. 
Appendices

Appendix A: getEpochTimeFrame()

sub getEpochTimeFrame() {
    # Author: Joseph D. Khair, 2012-2013
    # Convert "HH:MM" start/stop strings into epoch seconds for today
    # (or tomorrow, when $startTomorrow is 1; the callers omit it).
    my $startTime = $_[0];
    my $stopTime = $_[1];
    my $startTomorrow = $_[2];
    my @startTime = split(/:/, $startTime);
    my @stopTime = split(/:/, $stopTime);
    my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
    my $startTimeEpoch = timelocal(0,$startTime[1],$startTime[0],$mday+$startTomorrow,$mon,$year);
    my $stopTimeEpoch = timelocal(0,$stopTime[1],$stopTime[0],$mday+$startTomorrow,$mon,$year);
    # A stop time "earlier" than the start time means the run crosses
    # midnight, so push the stop time out by one day (86400 s).
    if($startTimeEpoch > $stopTimeEpoch) {
        $stopTimeEpoch = $stopTimeEpoch + 86400;
    }
    return($startTimeEpoch,$stopTimeEpoch);
}

Appendix B: exportGPIO()

sub exportGPIO() {
    # Author: Joseph D. Khair, 2012-2013
    # Make the given GPIO pin visible in sysfs.
    my $pin = $_[0];
    system("echo ${pin} > /sys/class/gpio/export");
}

Appendix C: unexportGPIO()

sub unexportGPIO() {
    # Author: Joseph D. Khair, 2012-2013
    # Release the given GPIO pin from sysfs.
    my $pin = $_[0];
    system("echo ${pin} > /sys/class/gpio/unexport");
}

Appendix D: initGPIO()

sub initGPIO() {
    # Author: Joseph D. Khair, 2012-2013
    # Set the pin direction ("in" or "out").
    my $pin = $_[0];
    my $dir = $_[1];
    system("echo ${dir} > /sys/class/gpio/gpio${pin}/direction");
}

Appendix E: setGPIO()

sub setGPIO() {
    # Author: Joseph D. Khair, 2012-2013
    # Drive the pin high (1) or low (0).
    my $pin = $_[0];
    my $val = $_[1];
    my $file = ">>/sys/class/gpio/gpio${pin}/value";
    system("echo ${val} > /sys/class/gpio/gpio${pin}/value");
}

Appendix F: readTemp()

sub readTemp() {
    # Author: Joseph D. Khair, 2012-2013
    # To access AIN0 on the P9 header, need to cat the AIN1 file.
    # There is NO AIN0 file in the directory.
    my $pin = $_[0] + 1;
    my ($a, $B, $res);
    $B = 3975;
    my $readSum = 0;
    my $numReadings = 20;
    for(my $i=0; $i < $numReadings; $i++) {
        open(AINFILE,"</sys/devices/platform/omap/tsc/ain${pin}");
        my $read = <AINFILE>;
        close(AINFILE);
        $readSum = $readSum + $read;
    }
    my $readAvg = $readSum/$numReadings;
    # The ADC level provided by the board assumes a max V of 1.8V.
    # Since I am using the 3.3V line with a voltage divider, the max
    # V I can get is 1.65V, hence, I must do a conversion. This is
    # lumped together with a conversion to a 1-1024 scale used by the
    # sensor, rather than the 0-4096 scale used by the board.
    $a = ($readAvg / 4096) * 1.8 * (1.8/1.65*2);
    my $aScaled = ($a / 3.29) * 1024;
    $res = ((1023 - $aScaled) * 10000) / $aScaled;
    my $valPrecise = 1/( ( log($res/10000) / $B ) + 1/298.15) - 273.15;
    my $val = sprintf("% 0.3f",$valPrecise);
    return($val);
}
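The final conversion in readTemp() is the B-parameter (simplified Steinhart-Hart) thermistor equation. With the constants used in the code (nominal resistance R_0 = 10 kOhm at T_0 = 298.15 K, and B = 3975), the reported temperature in degrees Celsius follows from the computed divider resistance R as

\[ T = \left[ \frac{\ln(R/R_0)}{B} + \frac{1}{T_0} \right]^{-1} - 273.15 \]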
Appendix G: BB.pl

# BB.pl
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing cooler status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Wait until test start time (defined in variable def section).
#  4. Read n samples from BeagleBone AIN (analog input from temp sensor).
#  5. Average n samples to reduce noise in measurement.
#  6. Every second compare average measurement to threshold.
#  7. If temp > threshold CLOSE relay (turn on the cooler) using the BeagleBone GPIO
#     control pin and leave the relay CLOSED until the temp reading is below threshold.
#  8. Else, if temp < threshold OPEN relay (turn off the cooler) using the BeagleBone
#     GPIO control pin and leave the relay OPEN until the temp reading is above threshold.
#  9. Write data to output file.
# 10. Continue steps 4-9 until the stop time has been reached and exit.
# 11. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close data output file.
# Note: Step 4 is transmission "down" & commands sent in steps 7 or 8 are transmissions "up."

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my $threshold = 26;       # in C
my $tolerance = 1;        # in C
my $startTime = "18:20";  # 24 hour clock
my $stopTime = "20:20";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataLeft.txt') {
    system("mv dataLeft.txt dataLeft.txt.old");
}

# Open/prepare output file for writing test data/TLM
open(LEFTOUTFILE, '>>dataLeft.txt');

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

# Poll temp sensor every 1 second. If temp > threshold turn on cooler (set
# GPIO 45 hi). Turn off the cooler when the temp goes below the threshold.
my $i = 0;
my $currentStatus = 0;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch)) {
    $i++;
    $temp = readTemp(0);
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    if(($temp > ($threshold+$tolerance)) && !($currentStatus)) {
        $currentStatus = 1;
        setGPIO($outPin,1);
    }
    elsif(($temp < ($threshold-$tolerance)) && $currentStatus) {
        $currentStatus = 0;
        setGPIO($outPin,0);
    }
    print LEFTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
    print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
    sleep(1);
    $time = time;
}

# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep(1);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print LEFTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
close(LEFTOUTFILE);
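The control law implemented in BB.pl is a bang-bang (on/off) law with a hysteresis band of half-width epsilon about the setpoint T_s (here T_s = 26 C and epsilon = 1 C); inside the band the cooler command u simply holds its previous value:

\[ u(t) = \begin{cases} 1, & T(t) > T_s + \varepsilon \\ 0, & T(t) < T_s - \varepsilon \\ u(t^-), & \text{otherwise} \end{cases} \]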
Appendix H: PID.pl

# PID.pl
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output files for writing cooler status, temperature reading & PID info each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Wait until test start time (defined in variable def section).
#  4. Read n samples from BeagleBone AIN (analog input from temp sensor).
#  5. Average n samples to reduce noise in measurement.
#  6. Each second ($PIDsampleTime) receive a temperature sensor transmission.
#  7. Use #6 transmission in a PID loop on the ground for processing.
#  8. As required (by error value from PID logic), transmit relay command "up" as necessary.
#  9. Write data to output file.
# 10. Continue steps 4-9 until the stop time has been reached and exit.
# 11. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close data output file.
# Note: Step 6 is transmission "down" & commands sent in step 8 are transmissions "up."

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my $threshold = 29;       # in C
my $tolerance = 1;        # in C
my $startTime = "02:50";  # 24 hour clock
my $stopTime = "04:20";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data files that may be present
if(-e 'dataRightPID1.txt') {
    system("mv dataRightPID1.txt dataRightPID1.txt.old");
}
if(-e 'dataRightPID2.txt') {
    system("mv dataRightPID2.txt dataRightPID2.txt.old");
}

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

# Enter PID loop. Each second, receive a temp sample from onboard across the "link."
# Process this data on the "ground" using the PID controller, and transmit back the
# necessary relay commands. All the while, record the happenings to file for later use
# in creating a model. Note that Kp, Ki, and Kd values have been determined using a
# modified Ziegler-Nichols method.
my $i = 0;
my $currentStatus = 0;
my $error = 0;
my $prevError = 0;
my $integral = 0;
my $derivative = 0;
my $setpoint = $threshold;
my $output;
my $Kp = 36;    # L=3, T=90; these values just oscillated
my $Ki = 0.06;
my $Kd = 540;
my $PIDsampleTime = 1;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch)) {
    my $prevTime = $time;
    sleep($PIDsampleTime);
    $time = time;
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    my $deltaTime;
    $i++;
    $temp = readTemp(0);
    $prevError = $error;
    $error = $setpoint - $temp;
    if($time == $prevTime) {$deltaTime = 1;}
    else {$deltaTime = $time - $prevTime;}
    $integral = $integral + ($error * ($deltaTime));
    $derivative = ($error - $prevError)/($deltaTime);
    $output = $Kp*$error + $Ki*$integral + $Kd*$derivative;
    if($output > 0) {
        $currentStatus = 0;
        setGPIO($outPin,0);
    }
    elsif($output < 0) {
        $currentStatus = 1;
        setGPIO($outPin,1);
    }
    # Open/prepare output files for writing test data/TLM (re-opened and
    # closed each pass so the data is flushed to disk every cycle)
    open(RIGHTOUTFILE1, '>>dataRightPID1.txt');
    open(RIGHTOUTFILE2, '>>dataRightPID2.txt');
    print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
    print RIGHTOUTFILE2 "${i}\t${currentTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
    print "${i}\t${printTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
    close(RIGHTOUTFILE1);
    close(RIGHTOUTFILE2);
}
# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
open(RIGHTOUTFILE1, '>>dataRightPID1.txt');
open(RIGHTOUTFILE2, '>>dataRightPID2.txt');
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep($PIDsampleTime);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
print RIGHTOUTFILE2 "${i}\t${currentTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
close(RIGHTOUTFILE1);
close(RIGHTOUTFILE2);
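The controller in PID.pl is the standard discrete (positional) PID law. With error e_k = T_set - T_k, sample interval Delta t_k, and the gains above (K_p = 36, K_i = 0.06, K_d = 540, tuned with a modified Ziegler-Nichols method), each pass computes

\[ I_k = I_{k-1} + e_k\,\Delta t_k, \qquad D_k = \frac{e_k - e_{k-1}}{\Delta t_k}, \qquad u_k = K_p e_k + K_i I_k + K_d D_k \]

and commands the relay closed (cooler on) when u_k < 0, i.e., when the plant is above the setpoint.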
Appendix I: mbtBB.pl

# mbtBB.pl (no model updates)
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing cooler status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Open model test file & read in model (previously recorded on/off cycle
#     durations and associated epoch cycle start times).
#  4. Wait until test start time (defined in variable def section).
#  5. Execute n cycles of traditional data transmission to stabilize the system.
#  6. Use the model to control the system. Every second, compare the time to the
#     list of cycle start times. If a match is found CLOSE the relay to turn on
#     the cooler and begin the cycle.
#  7. Wait for the number of seconds found in the model as the on time for that cycle.
#  8. When the on time for that cycle expires, OPEN the relay to turn off the cooler.
#  9. Wait for the number of seconds found in the model as the off time for that cycle.
# 10. Write data to output file.
# 11. Continue steps 6-10 until the stop time has been reached or no model data
#     remains and exit.
# 12. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close
#     data output file.

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;
sub modelRead;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my %model;
my ($modelStartTime,$modelStopTime,$maxSample);
my $startTime = "11:10";  # 24 hour clock
my $stopTime = "12:10";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataRightPID1.txt') {
    system("mv dataRightPID1.txt dataRightPID1.txt.old");
}

# Open/prepare output file for writing test data/TLM
open(RIGHTOUTFILE1, '>>dataRightPID1.txt');

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Read modelRight.txt and gather information
($modelStartTime,$modelStopTime,$maxSample) = modelRead("modelRight.txt");
print "\n\n$modelStartTime\t\t$modelStopTime\t$maxSample\n\n";

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

my $i = 0;
my $currentStatus = 0;
print "\n\nNow starting MBT!!!\n\n";
print "\n\nNow starting MBT!!!\n\n";
print "\n\nNow starting MBT!!!\n\n";

# Start MBT using the pulse train from the model ingested from a captured run
$i = 1;
$currentStatus = 0;
my $modelPosition = 1;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch) && $modelPosition < $maxSample) {
    setGPIO($outPin,1);
    $currentStatus = 1;
    my $hiTimeRemaining = $model{$modelPosition}{onduration};
    while($hiTimeRemaining > 0) {
        $hiTimeRemaining--;
        $i++;
        my $currentTime = time;
        my $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
        sleep(1);
        $time = time;
    }
    setGPIO($outPin,0);
    $currentStatus = 0;
    my $loTimeRemaining = $model{$modelPosition}{offduration};
    while($loTimeRemaining > 0) {
        $loTimeRemaining--;
        $i++;
        my $currentTime = time;
        my $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
        sleep(1);
        $time = time;
    }
    $modelPosition++;
}

# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep(1);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
close(RIGHTOUTFILE1);

################
# Sub-routines #
################
#-------------------------------------------------------------------
sub modelRead() {
    my $fileName = $_[0];
    my ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration);
    my ($modelStartTimeTemp,$modelStopTimeTemp,$maxSampleTemp);
    open(MODEL, $fileName) or die;
    my $z = 0;
    while (<MODEL>) {
        chomp;
        $z++;
        ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration) = split("\t");
        $model{$modelSample+0}{epoch} = $modelEpoch+0;
        $model{$modelSample+0}{onduration} = $modelOnDuration+0;
        $model{$modelSample+0}{offduration} = $modelOffDuration+0;
        # Save the first time stamp in the file for later use.
        if($z == 1) {$modelStartTimeTemp = $modelEpoch};
        $modelStopTimeTemp = $modelEpoch;
        $maxSampleTemp = $modelSample;
    }
    close(MODEL);
    return($modelStartTimeTemp, $modelStopTimeTemp, $maxSampleTemp);
}
#-------------------------------------------------------------------
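modelRead() expects a tab-separated model file with one on/off cycle per row: the cycle number, the epoch time at which the cycle starts, the on duration, and the off duration (both in seconds). A three-cycle modelRight.txt in this format would look like the following; the values are hypothetical, shown only to illustrate the layout, and note that each epoch equals the previous epoch plus the previous on and off durations:

1	1360000000	42	75
2	1360000117	41	77
3	1360000235	43	74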
Appendix J: acmbt1BB.pl

# acmbt1BB.pl (ACMBT: Phase Shift Algorithm Only)
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing cooler status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Open model test file & read in model (previously recorded on/off cycle
#     durations and associated epoch cycle start times).
#  4. Wait until test start time (defined in variable def section).
#  5. Execute n cycles of traditional data transmission to stabilize the system.
#  6. Use the model to control the system ("transmit the model").
#  7. At the designated cycle start times, turn on the cooler.
#  8. Wait for the number of seconds found in the model as the on time for that cycle.
#  9. When the on time for that cycle expires, OPEN the relay to turn off the cooler.
# 10. Wait for the number of seconds found in the model as the off time for that cycle.
# 11. While waiting, determine if a model update is required. If so, request one.
#     Allow the "ground" to perform one using the Phase Shift Algorithm.
# 12. Make sure all data is written to file each cycle for later analysis.
# 13. Continue steps 7-12 until the stop time has been reached or no model data
#     remains and exit.
# 14. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close
#     data output file.
# Note: The model transmission in step 6 and the model updates in step 11 are
# considered transmissions "up." Sensor data received as input for the updates
# in step 11 are considered transmissions "down."

######################
# Included libraries #
######################
use strict;
use Time::Local;
use POSIX;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;
sub modelRead;
sub modelUpdate;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my $modelTolerance = 0.2;  # in C, delta allowed for each sample comparison
my %model;
my ($modelStartTime,$modelStopTime,$maxSample);
my $startTime = "19:10";  # 24 hour clock
my $stopTime = "21:10";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataRight.txt') {
    system("mv dataRight.txt dataRight.txt.old");
}

# Open/prepare output file for writing test data/TLM
open(RIGHTOUTFILE, '>>dataRight.txt');

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Read model.txt and gather starting information
($modelStartTime,$modelStopTime,$maxSample) = modelRead("modelRight.txt");
print "\n\n$modelStartTime\t\t$modelStopTime\t$maxSample\n\n";

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}

# Use traditional data transmission for the first 3 on/off cycles before
# transitioning to ACMBT
my $threshold = 29;  # in C
my $tolerance = 1;   # in C
my $k = 0;
my $currentStatus = 0;
my $previousStatus = 0;
my $cycle = 0;
while(($time >= $startTimeEpoch) && ($cycle < 4)) {
    $k++;
    $temp = readTemp(0);
    $previousStatus = $currentStatus;
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    if(($temp > ($threshold+$tolerance)) && !($currentStatus)) {
        $currentStatus = 1;
    }
    elsif(($temp < ($threshold-$tolerance)) && $currentStatus) {
        $currentStatus = 0;
    }
    if(!($previousStatus) && $currentStatus) {$cycle++;}
    if($cycle < 4) {
        setGPIO($outPin,$currentStatus);
        print RIGHTOUTFILE "${k}\t${currentTime}\t${temp}\t${currentStatus}\t0\n";
        print "${k}\t${printTime}\t${temp}\t${currentStatus}\t0\n";
    }
    sleep(1);
    $time = time;
}

print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";

# Start ACMBT.
my $i = $k-1;
my $currentTime = time;
my $startTimeModelControl = $currentTime;
$currentStatus = 0;
my $numberOfTimesModelChanged = 0;
my %newModel;
my $adjTime = $startTimeModelControl - $modelStartTime;
my ($modelTempAtLastOnTime,$tempAtLastOnTime,$modelTempAtLastOffTime,$tempAtLastOffTime,$delta);
my ($timeAtLastOnTime,$timeAtLastOffTime,$totalCycleTimeLastCycle,$nextOnTime);
my $modelPosition = 1;

# Bring the model up to current time by calculating the difference between the
# first model time and the time now, and adding that number of seconds to each
# cycle time in the stored model. This is done in a simple foreach loop.
foreach my $key (sort keys %model) {
    $model{$key}{epoch} = $model{$key}{epoch} + $adjTime;
}
$nextOnTime = $model{$modelPosition}{epoch};
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch) && $modelPosition < $maxSample) {
    $i++;
    my $modelUpdateLockout = 0;
    $temp = readTemp(0);
    $previousStatus = $currentStatus;
    $currentTime = time;
    my $printTime = localtime $currentTime;
    if($currentTime >= $nextOnTime) {
        setGPIO($outPin,1);
        $currentStatus = 1;
        $tempAtLastOnTime = $temp;
        $timeAtLastOnTime = $currentTime;
    }
    my $hiTimeRemaining = $model{$modelPosition}{onduration};
    while($hiTimeRemaining > 0) {
        $hiTimeRemaining--;
        $i++;
        $currentTime = time;
        $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        sleep(1);
        $time = time;
    }
    setGPIO($outPin,0);
    $currentStatus = 0;
    $nextOnTime = $model{$modelPosition+1}{epoch};
    my $holdUntil = localtime $nextOnTime;
    print "\n\n$holdUntil\n\n";
    $currentTime = time;
    $tempAtLastOffTime = $temp;
    $timeAtLastOffTime = $currentTime;
    while($time < $nextOnTime) {
        $i++;
        $currentTime = time;
        $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        # Update the model here if needed while waiting for the next cooling cycle.
        if((($tempAtLastOnTime > 30.1) && ($tempAtLastOffTime > 28.1) && !$modelUpdateLockout) ||
           (($tempAtLastOnTime < 29.9) && ($tempAtLastOffTime < 27.9) && !$modelUpdateLockout)) {
            $totalCycleTimeLastCycle = $timeAtLastOffTime - $timeAtLastOnTime;
            print "\ntotalCycleTimeLastCycle = ${totalCycleTimeLastCycle}\n";
            my $totalChangeInTempLastCycle = $tempAtLastOnTime - $tempAtLastOffTime;
            print "totalChangeInTempLastCycle = ${totalChangeInTempLastCycle}\n";
            my $rateOfChange = $totalChangeInTempLastCycle/$totalCycleTimeLastCycle;
            print "rateOfChange = ${rateOfChange}\n";
            $delta = ((($tempAtLastOnTime-30)+($tempAtLastOffTime-28))/2)/$rateOfChange;
            # Round to the nearest second, then apply only half the indicated shift.
            $delta = floor(0.5*int($delta + $delta/abs($delta*2)));
            print "Shift this many seconds = ${delta}\n\n";
            $numberOfTimesModelChanged++;
            %model = modelUpdate($delta,$currentTime,$modelStartTime);
            $nextOnTime = $model{$modelPosition+1}{epoch};
            $modelUpdateLockout = 1;
        }
        sleep(1);
        $time = time;
    }
    $modelPosition++;
    sleep(1);
    $time = time;
}

# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep(1);
$currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
close(RIGHTOUTFILE);

################
# Sub-routines #
################
#-------------------------------------------------------------------
sub modelRead() {
    my $fileName = $_[0];
    my ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration);
    my ($modelStartTimeTemp,$modelStopTimeTemp,$maxSampleTemp);
    open(MODEL, $fileName) or die;
    my $z = 0;
    while (<MODEL>) {
        chomp;
        $z++;
        ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration) = split("\t");
        $model{$modelSample+0}{epoch} = $modelEpoch+0;
        $model{$modelSample+0}{onduration} = $modelOnDuration+0;
        $model{$modelSample+0}{offduration} = $modelOffDuration+0;
        # Save the first time stamp in the file for later use.
        if($z == 1) {$modelStartTimeTemp = $modelEpoch};
        $modelStopTimeTemp = $modelEpoch;
        $maxSampleTemp = $modelSample;
    }
    close(MODEL);
    return($modelStartTimeTemp, $modelStopTimeTemp, $maxSampleTemp);
}
#-------------------------------------------------------------------
sub modelUpdate() {
    my $averageDeviation = $_[0];
    my $currentTime = $_[1];
    my $modelStartTime = $_[2];
    my %newModel;
    my $timeShift = $averageDeviation;
    # Copy the current model...
    foreach my $key (sort keys %model) {
        $newModel{$key}{epoch} = $model{$key}{epoch};
        $newModel{$key}{onduration} = $model{$key}{onduration};
        $newModel{$key}{offduration} = $model{$key}{offduration};
    }
    # ...then shift each subsequent cycle start earlier by $timeShift seconds
    # and lengthen the stored off durations by the same amount.
    foreach my $key (sort keys %model) {
        $newModel{$key+1}{epoch} = $model{$key+1}{epoch}-$timeShift;
        $newModel{$key}{offduration} = $model{$key}{offduration}+$timeShift;
    }
    return(%newModel);
}
#-------------------------------------------------------------------
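The Phase Shift Algorithm above derives its correction from the most recent on-cycle. With T_on and T_off the temperatures sampled at the last cooler turn-on and turn-off, Delta t the duration of that cycle, and 30 C and 28 C the upper and lower band edges hard-coded in the listing, the raw shift is the average band-edge error divided by the observed cooling rate:

\[ r = \frac{T_{on} - T_{off}}{\Delta t}, \qquad \delta = \frac{\tfrac{1}{2}\left[(T_{on} - 30) + (T_{off} - 28)\right]}{r} \]

The shift delta is then rounded to the nearest second and halved, so that only half the indicated correction is applied per update (a damping choice), after which every remaining cycle start time in the model is shifted by delta seconds.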
Appendix K: acmbt1PID.pl

# acmbt1PID.pl (ACMBT: Average-based Cycle Priority Substitution Algorithm Only)
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing cooler status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Open model test file & read in model (previously recorded on/off cycle
#     durations and associated epoch cycle start times).
#  4. Wait until test start time (defined in variable def section).
#  5. Execute n cycles of traditional data transmission to stabilize the system.
#  6. Use the model to control the system ("transmit the model").
#  7. Update a running average of cooler temperature. Use this value to determine
#     if a model update is required. If so, request and perform one.
#  8. At the designated cycle start times, turn on the cooler.
#  9. Wait for the number of seconds found in the model as the on time for that cycle.
# 10. When the on time for that cycle expires, OPEN the relay to turn off the cooler.
# 11. Wait for the number of seconds found in the model as the off time for that cycle.
# 12. Make sure all data is written to file each cycle for later analysis.
# 13. Continue steps 7-12 until the stop time has been reached or no model data
#     remains and exit.
# 14. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close
#     data output file.
# Note: The model transmission in step 6 and the model updates in step 7 are
# considered transmissions "up." Sensor data received as input for the updates
# in step 7 are considered transmissions "down."

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;
sub modelRead;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my %model;
my ($modelStartTime,$modelStopTime,$maxSample);
my $startTime = "04:40";  # 24 hour clock
my $stopTime = "06:10";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data files that may be present
if(-e 'dataRightPID1.txt') {
    system("mv dataRightPID1.txt dataRightPID1.txt.old");
}
if(-e 'dataRightPID2.txt') {
    system("mv dataRightPID2.txt dataRightPID2.txt.old");
}

# Open/prepare output files for writing test data/TLM
open(RIGHTOUTFILE1, '>>dataRightPID1.txt');
open(RIGHTOUTFILE2, '>>dataRightPID2.txt');

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Read modelRight.txt and gather information
($modelStartTime,$modelStopTime,$maxSample) = modelRead("modelRight.txt");
print "\n\n$modelStartTime\t\t$modelStopTime\t$maxSample\n\n";

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

# Use traditional data transmission for the first 5 on/off cycles before
# transitioning to ACMBT
my $i = 0;
my $currentStatus = 0;
my $previousStatus = 0;
my $error = 0;
my $prevError = 0;
my $integral = 0;
my $derivative = 0;
my $threshold = 29;  # in C
my $setpoint = $threshold;
my $output;
my $Kp = 36;    # L=3, T=90; these values just oscillated
my $Ki = 0.06;
my $Kd = 540;
my $PIDsampleTime = 1;
my $tolerance = 1;   # in C
my $k = 0;
my $cycle = 0;
while(($time >= $startTimeEpoch) && ($cycle < 6)) {
    $k++;
    $previousStatus = $currentStatus;
    my $prevTime = $time;
    sleep($PIDsampleTime);
    $time = time;
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    my $deltaTime;
    $temp = readTemp(0);
    $prevError = $error;
    $error = $setpoint - $temp;
    if($time == $prevTime) {$deltaTime = 1;}
    else {$deltaTime = $time - $prevTime;}
    $integral = $integral + ($error * ($deltaTime));
    $derivative = ($error - $prevError)/($deltaTime);
    $output = $Kp*$error + $Ki*$integral + $Kd*$derivative;
    if($output > 0) {
        $currentStatus = 0;
        setGPIO($outPin,0);
    }
    elsif($output < 0) {
        $currentStatus = 1;
        setGPIO($outPin,1);
    }
    if(!($previousStatus) && $currentStatus) {$cycle++;}
    if($cycle < 6) {
        setGPIO($outPin,$currentStatus);
        print RIGHTOUTFILE1 "${k}\t${currentTime}\t${temp}\t${currentStatus}\t0\n";
        print RIGHTOUTFILE2 "${k}\t${currentTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
        print "${k}\t${printTime}\t${temp}\t${currentStatus}\t${error}\t${integral}\t${derivative}\t${output}\n";
    }
}

print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";

# Start ACMBT.
$i = $k-1;
$currentStatus = 0;
my $modelPosition = 1;
$threshold = 29;
$tolerance = 0.15;  # tightened tolerance for the sliding-window check
my $readCnt = 0;
my $totalTemp = 0;
my $slidingWindowAvgCount = 15;
my $modelUpdateLockout = 0;
my ($hiTimeRemaining,$loTimeRemaining);
my $numberOfTimesModelChanged = 0;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch) && $modelPosition < $maxSample) {
    # Enable cycle substitutions only after the first 20 model cycles.
    if($modelPosition > 20) {$modelUpdateLockout = 1;}
    $temp = readTemp(0);
    $totalTemp = $totalTemp + $temp;
    $readCnt++;
    if($readCnt >= $slidingWindowAvgCount && $modelUpdateLockout) {
        my $avgTemp = $totalTemp / $readCnt;
        if($avgTemp < ($threshold-$tolerance)) {
            $hiTimeRemaining = 0;
            $loTimeRemaining = $model{$modelPosition}{onduration} + $model{$modelPosition}{offduration};
            $numberOfTimesModelChanged++;
            print "\n\nAverage Temp = $avgTemp, Read Count = $readCnt\n";
            print "Old hiTime = $model{$modelPosition}{onduration}, New = 0\n";
            print "Old loTime = $model{$modelPosition}{offduration}, New = $loTimeRemaining\n\n";
        }
        elsif($avgTemp > ($threshold+$tolerance)) {
            $loTimeRemaining = 0;
            $hiTimeRemaining = $model{$modelPosition}{onduration} + $model{$modelPosition}{offduration};
            $numberOfTimesModelChanged++;
            print "\n\nAverage Temp = $avgTemp, Read Count = $readCnt\n";
            print "Old loTime = $model{$modelPosition}{offduration}, New = 0\n";
            print "Old hiTime = $model{$modelPosition}{onduration}, New = $hiTimeRemaining\n\n";
        }
        print "\n\nAverage Temp = $avgTemp, Read Count = $readCnt\n";
        $readCnt = 0;
        $totalTemp = 0;
    }
    else {
        $hiTimeRemaining = $model{$modelPosition}{onduration};
        $loTimeRemaining = $model{$modelPosition}{offduration};
    }
    while($hiTimeRemaining > 0) {
        setGPIO($outPin,1);
        $currentStatus = 1;
        $hiTimeRemaining--;
        $i++;
        my $currentTime = time;
        my $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        sleep(1);
        $time = time;
    }
    while($loTimeRemaining > 0) {
        setGPIO($outPin,0);
        $currentStatus = 0;
        $loTimeRemaining--;
        $i++;
        my $currentTime = time;
        my $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        sleep(1);
        $time = time;
    }
    $modelPosition++;
}

# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep(1);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print RIGHTOUTFILE1 "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
close(RIGHTOUTFILE1);
close(RIGHTOUTFILE2);

################
# Sub-routines #
################
#-------------------------------------------------------------------
sub modelRead() {
    my $fileName = $_[0];
    my ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration);
    my ($modelStartTimeTemp,$modelStopTimeTemp,$maxSampleTemp);
    open(MODEL, $fileName) or die;
    my $z = 0;
    while (<MODEL>) {
        chomp;
        $z++;
        ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration) = split("\t");
        $model{$modelSample+0}{epoch} = $modelEpoch+0;
        $model{$modelSample+0}{onduration} = $modelOnDuration+0;
        $model{$modelSample+0}{offduration} = $modelOffDuration+0;
        # Save the first time stamp in the file for later use.
        if($z == 1) {$modelStartTimeTemp = $modelEpoch};
        $modelStopTimeTemp = $modelEpoch;
        $maxSampleTemp = $modelSample;
    }
    close(MODEL);
    return($modelStartTimeTemp, $modelStopTimeTemp, $maxSampleTemp);
}
#-------------------------------------------------------------------
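The Average-based Cycle Priority Substitution Algorithm above never re-times the model; it re-prioritizes whole cycles. Let T-bar be the sliding-window mean of the last N = 15 samples, T_s = 29 C the setpoint, and epsilon = 0.15 C the tolerance. Each time the window completes (and substitutions are enabled, after the twentieth model cycle in the listing above), cycle k of the model, with durations (on_k, off_k), is replaced by an all-off or all-on cycle of the same total length:

\[ (on_k', off_k') = \begin{cases} (0,\; on_k + off_k), & \bar{T} < T_s - \varepsilon \\ (on_k + off_k,\; 0), & \bar{T} > T_s + \varepsilon \\ (on_k,\; off_k), & \text{otherwise} \end{cases} \]

Keeping the total cycle length unchanged preserves the timing of the downstream model cycles while substituting the heating or cooling "priority" of the current cycle.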
Appendix L: acmbt2BB.pl

# acmbt2BB.pl (ACMBT: Phase Shift & Amplitude Scaling Algorithms)
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing cooler status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin the first time.
#  3. Open model test file & read in model (previously recorded on/off cycle
#     durations and associated epoch cycle start times).
#  4. Perform a one-time algorithm-based amplitude scaling of the model prior to use.
#  5. Wait until test start time (defined in variable def section).
#  6. Execute n cycles of traditional data transmission to stabilize the system.
#  7. Use the model to control the system ("transmit the model").
#  8. At the designated cycle start times, turn on the cooler.
#  9. Wait for the number of seconds found in the model as the on time for that cycle.
# 10. When the on time for that cycle expires, OPEN the relay to turn off the cooler.
# 11. Wait for the number of seconds found in the model as the off time for that cycle.
# 12. While waiting, determine if a model update is required. If so, request one.
#     Allow the "ground" to perform one using the Phase Shift Algorithm.
# 13. Make sure all data is written to file each cycle for later analysis.
# 14. Continue steps 7-12 until the stop time has been reached or no model data
#     remains and exit.
# 15. Upon exit, make sure to turn off cooler, uninitialize GPIO pin and close
#     data output file.
# Note: The model transmission in step 7 and the model updates in step 12 are
# considered transmissions "up." Sensor data received as input for the updates
# in step 12 are considered transmissions "down."

######################
# Included libraries #
######################
use strict;
use Time::Local;
use POSIX;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;
sub modelRead;
sub modelUpdate;

####################
# Define variables #
####################
my $outPin = 45;
my $temp;
my $modelTolerance = 0.2;  # in C, delta allowed for each sample comparison
my %model;
my ($modelStartTime,$modelStopTime,$maxSample);
my $startTime = "15:35";  # 24 hour clock
my $stopTime = "17:35";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataRight.txt') {
    system("mv dataRight.txt dataRight.txt.old");
}

# Open/prepare output file for writing test data/TLM
open(RIGHTOUTFILE, '>>dataRight.txt');

# Export GPIO
exportGPIO($outPin);

# Initialize GPIO direction & starting value
initGPIO($outPin,"out");
setGPIO($outPin,0);

# Read model and gather starting information
($modelStartTime,$modelStopTime,$maxSample) = modelRead("modelRight.txt");
print "\n\n$modelStartTime\t\t$modelStopTime\t$maxSample\n\n";

# Estimate the trend in the on/off durations (average of the last three model
# cycles versus the first three) and scale the model durations accordingly
my $avgFirst3OnTimes = ($model{1}{onduration} + $model{2}{onduration} + $model{3}{onduration})/3;
my $avgLast3OnTimes = ($model{$maxSample-3}{onduration} + $model{$maxSample-2}{onduration} + $model{$maxSample-1}{onduration})/3;
my $onTimeMultiplier = (($avgLast3OnTimes - $avgFirst3OnTimes)/$avgFirst3OnTimes) + 1;
my $avgFirst3OffTimes = ($model{1}{offduration} + $model{2}{offduration} + $model{3}{offduration})/3;
my $avgLast3OffTimes = ($model{$maxSample-3}{offduration} + $model{$maxSample-2}{offduration} + $model{$maxSample-1}{offduration})/3;
my $offTimeMultiplier = (($avgLast3OffTimes - $avgFirst3OffTimes)/$avgFirst3OffTimes) + 1;
print "$avgFirst3OnTimes\t$avgLast3OnTimes\t$onTimeMultiplier\n\n";
print "$avgFirst3OffTimes\t$avgLast3OffTimes\t$offTimeMultiplier\n\n";
foreach my $key (sort keys %model) {
    # Scale each duration, rounding half away from zero.
    $model{$key}{onduration} = int(($model{$key}{onduration} * $onTimeMultiplier) + ($model{$key}{onduration} * $onTimeMultiplier)/abs(($model{$key}{onduration} * $onTimeMultiplier)*2));
    $model{$key}{offduration} = int(($model{$key}{offduration} * $offTimeMultiplier) + ($model{$key}{offduration} * $offTimeMultiplier)/abs(($model{$key}{offduration} * $offTimeMultiplier)*2));
    # Rebuild the cycle start times cumulatively from the scaled durations.
    if($key>1) {
        $model{$key}{epoch} = $model{$key-1}{epoch} + $model{$key-1}{onduration} + $model{$key-1}{offduration};
    }
    print "$key\t$model{$key}{epoch}\t$model{$key}{onduration}\t$model{$key}{offduration}\n";
}

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}

# Use traditional data transmission for the first 3 on/off cycles before
# transitioning to ACMBT
my $threshold = 29;  # in C
my $tolerance = 1;   # in C
my $k = 0;
my $currentStatus = 0;
my $previousStatus = 0;
my $cycle = 0;
while(($time >= $startTimeEpoch) && ($cycle < 4)) {
    $k++;
    $temp = readTemp(0);
    $previousStatus = $currentStatus;
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    if(($temp > ($threshold+$tolerance)) && !($currentStatus)) {
        $currentStatus = 1;
    }
    elsif(($temp < ($threshold-$tolerance)) && $currentStatus) {
        $currentStatus = 0;
    }
    if(!($previousStatus) && $currentStatus) {$cycle++;}
    if($cycle < 4) {
        setGPIO($outPin,$currentStatus);
        print RIGHTOUTFILE "${k}\t${currentTime}\t${temp}\t${currentStatus}\t0\n";
        print "${k}\t${printTime}\t${temp}\t${currentStatus}\t0\n";
    }
    sleep(1);
    $time = time;
}

print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";
print "\n\nNow starting ACMBT!!!\n\n";

# Start ACMBT.
my $i = $k-1;
my $currentTime = time;
my $startTimeModelControl = $currentTime;
$currentStatus = 0;
my $numberOfTimesModelChanged = 0;
my %newModel;
my $adjTime = $startTimeModelControl - $modelStartTime;
my ($modelTempAtLastOnTime,$tempAtLastOnTime,$modelTempAtLastOffTime,$tempAtLastOffTime,$delta);
my ($timeAtLastOnTime,$timeAtLastOffTime,$totalCycleTimeLastCycle,$nextOnTime);
my $modelPosition = 1;

# Bring the model up to current time by calculating the difference between the
# first model time and the time now, and adding that number of seconds to each
# cycle time in the stored model. This is done in a simple foreach loop.
foreach my $key (sort keys %model) {
    $model{$key}{epoch} = $model{$key}{epoch} + $adjTime;
}
$nextOnTime = $model{$modelPosition}{epoch};
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch) && $modelPosition < $maxSample) {
    $i++;
    my $modelUpdateLockout = 0;
    $temp = readTemp(0);
    $previousStatus = $currentStatus;
    $currentTime = time;
    my $printTime = localtime $currentTime;
    if($currentTime >= $nextOnTime) {
        setGPIO($outPin,1);
        $currentStatus = 1;
        $tempAtLastOnTime = $temp;
        $timeAtLastOnTime = $currentTime;
    }
    my $hiTimeRemaining = $model{$modelPosition}{onduration};
    while($hiTimeRemaining > 0) {
        $hiTimeRemaining--;
        $i++;
        $currentTime = time;
        $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        sleep(1);
        $time = time;
    }
    setGPIO($outPin,0);
    $currentStatus = 0;
    $nextOnTime = $model{$modelPosition+1}{epoch};
    my $holdUntil = localtime $nextOnTime;
    print "\n\n$holdUntil\n\n";
    $currentTime = time;
    $tempAtLastOffTime = $temp;
    $timeAtLastOffTime = $currentTime;
    while($time < $nextOnTime) {
        $i++;
        $currentTime = time;
        $printTime = localtime $currentTime;
        $temp = readTemp(0);
        print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
        # Update the model here if needed while waiting for the next cooling cycle.
        if((($tempAtLastOnTime > 30.1) && ($tempAtLastOffTime > 28.1) && !$modelUpdateLockout) ||
           (($tempAtLastOnTime < 29.9) && ($tempAtLastOffTime < 27.9) && !$modelUpdateLockout)) {
            $totalCycleTimeLastCycle = $timeAtLastOffTime - $timeAtLastOnTime;
            print "\ntotalCycleTimeLastCycle = ${totalCycleTimeLastCycle}\n";
            my $totalChangeInTempLastCycle = $tempAtLastOnTime - $tempAtLastOffTime;
            print "totalChangeInTempLastCycle = ${totalChangeInTempLastCycle}\n";
            my $rateOfChange = $totalChangeInTempLastCycle/$totalCycleTimeLastCycle;
            print "rateOfChange = ${rateOfChange}\n";
            $delta = ((($tempAtLastOnTime-30)+($tempAtLastOffTime-28))/2)/$rateOfChange;
            # Round to the nearest second, then apply only half the indicated shift.
            $delta = floor(0.5*int($delta + $delta/abs($delta*2)));
            print "Shift this many seconds = ${delta}\n\n";
            $numberOfTimesModelChanged++;
            %model = modelUpdate($delta,$currentTime,$modelStartTime);
            $nextOnTime = $model{$modelPosition+1}{epoch};
            $modelUpdateLockout = 1;
        }
        sleep(1);
        $time = time;
    }
    $modelPosition++;
    sleep(1);
    $time = time;
}

# Test is over. Open relay, powering off all connected equipment, and make
# one last entry in the data output file.
$currentStatus = 0;
setGPIO($outPin,0);
unexportGPIO($outPin);
$i++;
sleep(1);
$currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print RIGHTOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\t$numberOfTimesModelChanged\n";
close(RIGHTOUTFILE);

################
# Sub-routines #
################
#-------------------------------------------------------------------
sub modelRead() {
    my $fileName = $_[0];
    my ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration);
    my ($modelStartTimeTemp,$modelStopTimeTemp,$maxSampleTemp);
    open(MODEL, $fileName) or die;
    my $z = 0;
    while (<MODEL>) {
        chomp;
        $z++;
        ($modelSample, $modelEpoch, $modelOnDuration, $modelOffDuration) = split("\t");
        $model{$modelSample+0}{epoch} = $modelEpoch+0;
        $model{$modelSample+0}{onduration} = $modelOnDuration+0;
        $model{$modelSample+0}{offduration} = $modelOffDuration+0;
        # Save the first time stamp in the file for later use.
        if($z == 1) {$modelStartTimeTemp = $modelEpoch};
        $modelStopTimeTemp = $modelEpoch;
        $maxSampleTemp = $modelSample;
    }
    close(MODEL);
    return($modelStartTimeTemp, $modelStopTimeTemp, $maxSampleTemp);
}
#-------------------------------------------------------------------
sub modelUpdate() {
    my $averageDeviation = $_[0];
    my $currentTime = $_[1];
    my $modelStartTime = $_[2];
    my %newModel;
    my $timeShift = $averageDeviation;
    # Copy the current model...
    foreach my $key (sort keys %model) {
        $newModel{$key}{epoch} = $model{$key}{epoch};
        $newModel{$key}{onduration} = $model{$key}{onduration};
        $newModel{$key}{offduration} = $model{$key}{offduration};
    }
    # ...then shift each subsequent cycle start earlier by $timeShift seconds
    # and lengthen the stored off durations by the same amount.
    foreach my $key (sort keys %model) {
        $newModel{$key+1}{epoch} = $model{$key+1}{epoch}-$timeShift;
        $newModel{$key}{offduration} = $model{$key}{offduration}+$timeShift;
    }
    return(%newModel);
}
#-------------------------------------------------------------------
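The one-time amplitude scaling performed at the top of acmbt2BB.pl stretches (or compresses) every model duration by the trend observed across the captured run. With mu_first and mu_last the mean on durations of the first and last three model cycles (and likewise for the off durations), each duration is scaled and rounded, and the cycle start times are then rebuilt cumulatively:

\[ m_{on} = 1 + \frac{\mu_{last} - \mu_{first}}{\mu_{first}}, \qquad on_k' = \mathrm{round}\!\left(m_{on}\,on_k\right), \qquad epoch_{k+1} = epoch_k + on_k' + off_k' \]

The Phase Shift Algorithm then operates on this rescaled model exactly as in Appendix J.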
Appendix M: heaterControl.pl

# heaterControl.pl
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing heater status and temperature reading each cycle.
#  2. Export, initialize, set GPIO pin(s) the first time.
#  3. Wait until test start time (defined in variable def section).
#  4. Read n samples from BeagleBone AIN (analog input from temp sensor).
#  5. Average n samples to reduce noise in measurement.
#  6. Every second compare average measurement to threshold.
#  7. If temp < threshold CLOSE relay (turn on the heater) using the BeagleBone GPIO
#     control pin and leave the relay CLOSED until the temp reading is above threshold.
#  8. Else, if temp > threshold OPEN relay (turn off the heater) using the BeagleBone
#     GPIO control pin and leave the relay OPEN until the temp reading is below threshold.
#  9. Write data to output file.
# 10. Continue steps 4-9 until the stop time has been reached and exit.
# 11. Upon exit, make sure to turn off heater(s), uninitialize GPIO pin(s) and close
#     data output file.

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;

####################
# Define variables #
####################
my $outPin1 = 45;
my $outPin2 = 47;
my $temp;
my $threshold = 36;       # in C
my $tolerance = 0.05;     # in C
my $startTime = "20:40";  # 24 hour clock
my $stopTime = "22:10";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataPlateTemp.txt') {
    system("mv dataPlateTemp.txt dataPlateTemp.txt.old");
}

# Export GPIO
exportGPIO($outPin1);
exportGPIO($outPin2);

# Initialize GPIO direction & value
initGPIO($outPin1,"out");
setGPIO($outPin1,0);
initGPIO($outPin2,"out");
setGPIO($outPin2,0);

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

# Poll temp sensor every second. If temp < threshold turn on heaters (set GPIO
# 45 & 47 hi). Turn off the heater when the temp goes above the threshold.
my $i = 0;
my $currentStatus = 0;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch)) {
    $i++;
    $temp = readTemp(0);
    #$temp = $temp * 1.8 + 32;
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    if(($temp < ($threshold-$tolerance)) && !($currentStatus)) {
        $currentStatus = 1;
        setGPIO($outPin1,1);
        setGPIO($outPin2,1);
    }
    elsif(($temp > ($threshold+$tolerance)) && $currentStatus) {
        $currentStatus = 0;
        setGPIO($outPin1,0);
        setGPIO($outPin2,0);
    }
    # Open/prepare output file for writing test data/TLM (re-opened and
    # closed each pass so the data is flushed to disk every cycle)
    open(PLATEOUTFILE, '>>dataPlateTemp.txt');
    print PLATEOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
    print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
    sleep(1);
    close(PLATEOUTFILE);
    $time = time;
}

# Test is over. Open relays, powering off all connected equipment, and make
# one last entry in the data output file.
open(PLATEOUTFILE, '>>dataPlateTemp.txt');
$currentStatus = 0;
setGPIO($outPin1,0);
setGPIO($outPin2,0);
unexportGPIO($outPin1);
unexportGPIO($outPin2);
$i++;
sleep(1);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print PLATEOUTFILE "${i}\t${currentTime}\t${temp}\t${currentStatus}\n";
print "${i}\t${printTime}\t${temp}\t${currentStatus}\n";
close(PLATEOUTFILE);
Appendix N: roomTempRecord.pl

# roomTempRecord.pl
# Author: Joseph D. Khair, 2012-2013
# Objectives:
#  1. Open output file for writing room temperature reading each cycle.
#  2. Wait until test start time (defined in variable def section).
#  3. Read n samples from BeagleBone AIN (analog input from temp sensor).
#  4. Average n samples to reduce noise in measurement.
#  5. Write data to output file.
#  6. Continue steps 3-5 until the stop time has been reached and exit.
#  7. Upon exit, close data output file.

######################
# Included libraries #
######################
use strict;
use Time::Local;

########################
# Define internal subs #
########################
sub getEpochTimeFrame;
sub exportGPIO;
sub unexportGPIO;
sub initGPIO;
sub setGPIO;
sub readTemp;

####################
# Define variables #
####################
my $temp;
my $startTime = "20:40";  # 24 hour clock
my $stopTime = "22:10";   # 24 hour clock
my ($startTimeEpoch,$stopTimeEpoch) = getEpochTimeFrame($startTime,$stopTime);

################
# Main program #
################
# Archive old data file that may be present
if(-e 'dataRoomTemp.txt') {
    system("mv dataRoomTemp.txt dataRoomTemp.txt.old");
}

# Hold pattern until start time is reached
my $time = time;
while($time < $startTimeEpoch) {
    print "waiting for start time...\n";
    sleep(1);
    $time = time;
}
print "\n";

# Poll temp sensor every second.
my $i = 0;
while(($time >= $startTimeEpoch) && ($time < $stopTimeEpoch)) {
    $i++;
    $temp = readTemp(0);
    my $currentTime = time;
    my $printTime = localtime $currentTime;
    open(TEMPOUTFILE, '>>dataRoomTemp.txt');
    print TEMPOUTFILE "${i}\t${currentTime}\t${temp}\n";
    print "${i}\t${printTime}\t${temp}\n";
    sleep(1);
    close(TEMPOUTFILE);
    $time = time;
}

open(TEMPOUTFILE, '>>dataRoomTemp.txt');
$i++;
sleep(1);
my $currentTime = time;
my $printTime = localtime $currentTime;
$temp = readTemp(0);
print TEMPOUTFILE "${i}\t${currentTime}\t${temp}\n";
print "${i}\t${printTime}\t${temp}\n";
close(TEMPOUTFILE);

Appendix O: dataAnalysisReport.m

%% dataAnalysisReport.m
% Author: Joseph D. Khair, 2012-2013
% The purpose of this script is to take in the data produced by the test
% fixture and stored on BeagleBone boards and produce a detailed report.
% The report graphs the raw test data for a single unit, processed data
% that indicates cycle-by-cycle metrics, and other pertinent information.
%
% There are 3 sets of data written throughout the test that are used as
% input to this script:
%   1. dataLeft/Right.txt - temperature and cooler status (L/R unit)
%   2. dataPlateTemp.txt  - temperature of the Aluminum plate (edge temp)
%   3. dataRoomTemp.txt   - temperature of the room (environment)

% Clear the workspace
close all;
clear all;

%% Read in the data by requesting files (full path) from user
coolerInFile = input('Location of Cooler unit data file? ');
plateInFile = input('Location of Plate temperature data file? ');
roomInFile = input('Location of Room temperature data file? ');
[coolerData(:,1),coolerData(:,2),coolerData(:,3),coolerData(:,4)] = textread(coolerInFile,'%d %d %f %d');
[plateData(:,1),plateData(:,2),plateData(:,3),plateData(:,4)] = textread(plateInFile,'%d %d %f %d');
[roomData(:,1),roomData(:,2),roomData(:,3)] = textread(roomInFile,'%d %d %f');

%% Ensure that all arrays are the same length. Occasionally, the
% BeagleBone will skip a cycle due to the speed (or lack thereof) of the
% processor, so finished arrays can differ in length.
startTime = min(roomData(:,2));
stopTime = max(roomData(:,2));
fullSpan = stopTime-startTime;

% Remove duplicated time stamps (the same epoch second logged twice).
lengthCoolerData = length(coolerData);
i = 2;
while (i < lengthCoolerData)
    if(coolerData(i,2) == coolerData(i-1,2))
        coolerData = [coolerData(1:i-1,:);[coolerData(i+1:end,1)-1 coolerData(i+1:end,2) coolerData(i+1:end,3) coolerData(i+1:end,4)]];
    end
    i = i+1;
    lengthCoolerData = length(coolerData);
end
lengthPlateData = length(plateData);
i = 2;
while (i < lengthPlateData)
    if(plateData(i,2) == plateData(i-1,2))
        plateData = [plateData(1:i-1,:);[plateData(i+1:end,1)-1 plateData(i+1:end,2) plateData(i+1:end,3) plateData(i+1:end,4)]];
    end
    i = i+1;
    lengthPlateData = length(plateData);
end
lengthRoomData = length(roomData);
i = 2;
while (i < lengthRoomData)
    if(roomData(i,2) == roomData(i-1,2))
        % roomData has only three columns (index, epoch, temperature).
        roomData = [roomData(1:i-1,:);[roomData(i+1:end,1)-1 roomData(i+1:end,2) roomData(i+1:end,3)]];
    end
    i = i+1;
    lengthRoomData = length(roomData);
end

% Fill any skipped seconds by repeating the previous sample.
for i=2:fullSpan+1
    if(length(coolerData) > i)
        if(coolerData(i,2) ~= startTime+(i-1))
            coolerData = [coolerData(1:i-1,:);[i coolerData(i-1,2)+1 coolerData(i-1,3) coolerData(i-1,4)];[coolerData(i:end,1)+1 coolerData(i:end,2) coolerData(i:end,3) coolerData(i:end,4)]];
        end
    else
        coolerData = [coolerData(1:i-1,:);[i coolerData(i-1,2)+1 coolerData(i-1,3) coolerData(i-1,4)]];
    end
    if(length(plateData) > i)
        if(plateData(i,2) ~= startTime+(i-1))
            plateData = [plateData(1:i-1,:);[i plateData(i-1,2)+1 plateData(i-1,3) plateData(i-1,4)];[plateData(i:end,1)+1 plateData(i:end,2) plateData(i:end,3) plateData(i:end,4)]];
        end
    else
        plateData = [plateData(1:i-1,:);[i plateData(i-1,2)+1 plateData(i-1,3) plateData(i-1,4)]];
    end
    if(length(roomData) > i)
        if(roomData(i,2) ~= startTime+(i-1))
            roomData = [roomData(1:i-1,:);[i roomData(i-1,2)+1 roomData(i-1,3)];[roomData(i:end,1)+1 roomData(i:end,2) roomData(i:end,3)]];
        end
    else
        roomData = [roomData(1:i-1,:);[i roomData(i-1,2)+1 roomData(i-1,3)]];
    end
end

%% Plot raw data summary (Single Unit)
% Convert Unix epoch seconds to MATLAB datenum, shifting 7 h (25200 s)
% from UTC to local time (25569 = days from year 0 to 1-Jan-1970).
epochTime = coolerData(:,2);
epochTime = ((epochTime - 25200) / 86400) + 25569;
dateVecs = datevec(epochTime);
adjDateVecs = [dateVecs(:,1)+1900,dateVecs(:,2),dateVecs(:,3)-1,dateVecs(:,4)-1,dateVecs(:,5),dateVecs(:,6)];
epochTime = datenum(adjDateVecs);
[AX,GY1,GY2] = plotyy(epochTime,[coolerData(:,3) plateData(:,3) roomData(:,3)],epochTime,coolerData(:,4));
set(AX(1),'XLim',[epochTime(1) epochTime(length(epochTime))]);
minBetweenTicks = 10;
linkprop(AX,{'Xlim','XTickLabel','Xtick'}); % Link the limits/labels of both axes
axis tight
set(AX(1),'XTick',[epochTime(1):minBetweenTicks*30*(epochTime(2)-epochTime(1)):epochTime(length(epochTime))]);
datetick(AX(1),'x',15,'keeplimits','keepticks');
rotateXLabels(AX(1),90);
set(AX(1),'YLim',[floor(min(roomData(:,3))-2) ceil(max(plateData(:,3))+2)]);
set(AX(1),'YTick',[(floor(min(roomData(:,3))-2)):0.5:(ceil(max(plateData(:,3)))+2)]);
set(AX(2),'YLim',[-0.2 9.8]);
set(AX(2),'YTick',[-0.2:1:9.8]);
set(get(AX(1),'Ylabel'),'string','Temperature [C]');
set(get(AX(2),'Ylabel'),'string','Cooler Status [0=off/1=on]','color','black');
set(get(AX(1),'Xlabel'),'string','Time [HH:MM]');
set(AX(2),'YColor','k');
box(AX(1),'off');
set(GY1,'LineStyle','-')
set(GY2,'LineStyle','-')
set(GY1,'LineWidth',1.1)
set(GY2,'LineWidth',1.1)
legend([GY1;GY2],'Cooler Temp','Plate Temp','Room Temp','Cooler On/Off Status','Location','NorthEast','Orientation','horizontal');

% Add threshold to plot
grid on;
zoom on;
hold on;
plot(get(gca,'xlim'), [28 28],'color','black','LineStyle','--','LineWidth',1.8);
plot(get(gca,'xlim'), [29 29],'color','black','LineWidth',1.8);
plot(get(gca,'xlim'), [30 30],'color','black','LineStyle','--','LineWidth',1.8);
hold off

% Save png and fig for later
set(gcf, 'InvertHardCopy', 'off');
screen_size = get(0, 'ScreenSize');
set(gcf, 'Position', [0 0 screen_size(3) screen_size(4)]);
xlabh = get(gca,'XLabel');
set(xlabh,'Position',get(xlabh,'Position') + [0 0.04 0]);
saveas(gcf, 'fig1', 'png');
saveas(gcf, 'fig1', 'fig');

%% Calculate the Statistics on a per-cycle basis
onTimes = [];
offTimes = [];
j = 1;
k = 1;
for i=1:length(coolerData(:,1))-1
    if ((coolerData(i+1,4) - coolerData(i,4)) == 1)
        onTimes(j) = epochTime(i);
        j = j+1;
    elseif ((coolerData(i+1,4) - coolerData(i,4)) == -1)
        offTimes(k) = epochTime(i);
        k = k+1;
    end
end
if(onTimes(1) > offTimes(1))
    onTimes = [epochTime(1) onTimes];
else
    offTimes = [epochTime(1) offTimes];
end

onTimeDurations = [];
offTimeDurations = [];
z = 1;
for i=1:(length(onTimes)-1)
    if(onTimes(1) > offTimes(1))
        onTimeDurations(z) = 86400*(onTimes(i) - offTimes(i));
        offTimeDurations(z) = 86400*(offTimes(i+1) - onTimes(i));
    else
        onTimeDurations(z) = 86400*(offTimes(i) - onTimes(i));
        offTimeDurations(z) = 86400*(onTimes(i+1) - offTimes(i));
    end
    z = z+1;
end

%% Make the vectors the same length for plotting
count = min([length(onTimes) length(offTimes) length(onTimeDurations) length(offTimeDurations)]);
onTimes = onTimes(1:end-(length(onTimes)-count));
offTimes = offTimes(1:end-(length(offTimes)-count));
onTimeDurations = onTimeDurations(1:end-(length(onTimeDurations)-count));
offTimeDurations = offTimeDurations(1:end-(length(offTimeDurations)-count));
plotTempPlate = interp1(epochTime,plateData(:,3),onTimes);
plotTempRoom = interp1(epochTime,roomData(:,3),onTimes);

%% Plot the on/off stats for the unit
figure();
% For runs with a large disturbance use 10; otherwise use 120 for smoothing
% (120 for bang-bang, 2 for PID)
onTimeDurationsSmoothed = smooth(onTimeDurations(6:end),10);
offTimeDurationsSmoothed = smooth(offTimeDurations(6:end),10);
% Use the (6:end) lines for runs where the first few cycles are excluded
% from the model. Otherwise, use the (:) lines below.
%onTimeDurationsSmoothed = smooth(onTimeDurations(:),2);
%offTimeDurationsSmoothed = smooth(offTimeDurations(:),2);
[BX,HY1,HY2] = plotyy(onTimes,[onTimeDurations(:) offTimeDurations(:)],onTimes,[plotTempPlate(:) plotTempRoom(:)]);
% Link the limits of both axes, and the labels
linkprop(BX,{'Xlim','XTickLabel','Xtick'});
axis tight
minBetweenTicks = 10;
set(BX(1),'XLim',[onTimes(1) onTimes(length(onTimes))]);
set(get(BX(1),'Xlabel'),'string','Time [HH:MM]');
set(BX(1),'XTick',[onTimes(1):minBetweenTicks*30*(epochTime(2)-epochTime(1)):onTimes(length(onTimes))]);
datetick(BX(1),'x',15,'keeplimits','keepticks');
rotateXLabels(BX(1),90);
set(BX(1),'YLim',[floor(min([offTimeDurations onTimeDurations])-10) ceil(max([offTimeDurations onTimeDurations])+10)]);
set(BX(1),'YTick',floor(min([offTimeDurations onTimeDurations])-10):5:ceil(max([offTimeDurations onTimeDurations])+10));
set(BX(2),'YLim',[floor(min(roomData(:,3))-1) ceil(max(plateData(:,3))+1)]);
set(BX(2),'YTick',[floor(min(roomData(:,3))-1):0.5:ceil(max(plateData(:,3))+1)]);
set(get(BX(1),'Ylabel'),'string','Duration [s]');
set(get(BX(2),'Ylabel'),'string','Temperature [C]');
box(BX(1),'off');
set(HY1,'marker','o')
set(HY1,'LineStyle','none')
set(HY1,'LineWidth',1.1)
set(HY2,'LineStyle','-')
set(HY2,'LineWidth',1.1)
hold on;
% Use the (6:end) lines for runs where the first few cycles are excluded
% from the model. Otherwise, use the (:) lines.
plot(onTimes(6:end),onTimeDurationsSmoothed(:),'b','LineWidth',1.1);
plot(onTimes(6:end),offTimeDurationsSmoothed(:),'Color',[0 .5 0],'LineWidth',1.1);
%plot(onTimes(:),onTimeDurationsSmoothed(:),'b','LineWidth',1.1);
%plot(onTimes(:),offTimeDurationsSmoothed(:),'Color',[0 .5 0],'LineWidth',1.1);
title('Cooler Per-Cycle Metrics (On/Off Duration) vs. Time');
legend([HY1;HY2],'On','Off','Plate Temp','Room Temp','Location','NorthEast','Orientation','horizontal');
grid on;
zoom on;
hold off;

% Save png and fig for later
set(gcf, 'InvertHardCopy', 'off');
screen_size = get(0, 'ScreenSize');
set(gcf, 'Position', [0 0 screen_size(3) screen_size(4)]);
xlabh = get(gca,'XLabel');
set(xlabh,'Position',get(xlabh,'Position') + [0 0.04 0]);
saveas(gcf, 'fig2', 'png');
saveas(gcf, 'fig2', 'fig');

%% Calculate Figure-of-Merit #1: Time spent above/below Thresholds for
% Bang-Bang!
Tlower = 28;
Tupper = 30;
startTime = [];
stopTime = [];
StartFound = 0;
totalOutTime = 0;
startSample = 100; % Start past the initial cool down to avoid false large readings
for g=startSample:length(epochTime)
    if (((coolerData(g,3) < Tlower) || (coolerData(g,3) > Tupper)) && ~StartFound)
        startTime = coolerData(g,2);
        StartFound = 1;
    elseif (((coolerData(g,3) < Tlower) || (coolerData(g,3) > Tupper)) && StartFound)
        totalOutTime = totalOutTime + (coolerData(g,2) - coolerData(g-1,2));
    else
        startTime = [];
        StartFound = 0;
    end
end
totalOutTime                                           % in seconds
max(coolerData(startSample:end,3))                     % in C
min(coolerData(startSample:end,3))                     % in C
coolerData(end,2)-coolerData(1,2)                      % in s
mean(coolerData(startSample:end,3))                    % in C
(Tlower+Tupper)/2-mean(coolerData(startSample:end,3))  % in C

Appendix P: dataAnalysisReportUpdates.m

%% dataAnalysisReportUpdates.m
% Author: Joseph D. Khair, 2012-2013
% The purpose of this script is to take in the data produced by the test
% fixture and stored on BeagleBone Boards and produce a detailed report.
% The report will graph the raw test data for a single unit, processed data
% that indicates cycle-by-cycle metrics, and other pertinent information.
% This script is nearly identical to dataAnalysisReport.m, but differs in
% that it is written to process data created from model-based data
% transmission test runs. As such, the cooler data file format contains an
% extra column (model update data) and extra graphs are produced.
%
% There are 3 sets of data written throughout the test that are used as
% input to this script.
%   1. dataLeft/Right.txt - temperature and cooler status (L/R unit)
%   2. dataPlateTemp.txt  - temperature of the Aluminum plate (edge temp)
%   3. dataRoomTemp.txt   - temperature of the room (environment)

% Clear the workspace
close all;
clear all;

%% Read in the data by requesting files (full path) from user
coolerInFile = input('Location of Cooler unit data file? ');
plateInFile = input('Location of Plate temperature data file? ');
roomInFile = input('Location of Room temperature data file? ');
[coolerData(:,1),coolerData(:,2),coolerData(:,3),coolerData(:,4),coolerData(:,5)] = textread(coolerInFile,'%d %d %f %d %d');
[plateData(:,1),plateData(:,2),plateData(:,3),plateData(:,4)] = textread(plateInFile,'%d %d %f %d');
[roomData(:,1),roomData(:,2),roomData(:,3)] = textread(roomInFile,'%d %d %f');

%% Ensure that all arrays are the same length. Occasionally, the
% BeagleBone will skip a cycle due to the speed (or lack thereof) of the
% processor. So, finished arrays can differ in length.
startTime = min(roomData(:,2));
stopTime = max(roomData(:,2));
fullSpan = stopTime-startTime;

lengthCoolerData = length(coolerData);
i = 2;
while (i < lengthCoolerData)
    if(coolerData(i,2) == coolerData(i-1,2))
        coolerData = [coolerData(1:i-1,:);[coolerData(i+1:end,1)-1 coolerData(i+1:end,2) coolerData(i+1:end,3) coolerData(i+1:end,4) coolerData(i+1:end,5)]];
    end
    i = i+1;
    lengthCoolerData = length(coolerData);
end

lengthPlateData = length(plateData);
i = 2;
while (i < lengthPlateData)
    if(plateData(i,2) == plateData(i-1,2))
        plateData = [plateData(1:i-1,:);[plateData(i+1:end,1)-1 plateData(i+1:end,2) plateData(i+1:end,3) plateData(i+1:end,4)]];
    end
    i = i+1;
    lengthPlateData = length(plateData);
end

lengthRoomData = length(roomData);
i = 2;
while (i < lengthRoomData)
    if(roomData(i,2) == roomData(i-1,2))
        roomData = [roomData(1:i-1,:);[roomData(i+1:end,1)-1 roomData(i+1:end,2) roomData(i+1:end,3)]];
    end
    i = i+1;
    lengthRoomData = length(roomData);
end

for i=2:fullSpan+1
    if(length(coolerData) > i)
        if(coolerData(i,2) ~= startTime+(i-1))
            coolerData = [coolerData(1:i-1,:);[i coolerData(i-1,2)+1 coolerData(i-1,3) coolerData(i-1,4) coolerData(i-1,5)];[coolerData(i:end,1)+1 coolerData(i:end,2) coolerData(i:end,3) coolerData(i:end,4) coolerData(i:end,5)]];
        end
    else
        coolerData = [coolerData(1:i-1,:);[i coolerData(i-1,2)+1 coolerData(i-1,3) coolerData(i-1,4) coolerData(i-1,5)]];
    end
    if(length(plateData) > i)
        if(plateData(i,2) ~= startTime+(i-1))
            plateData = [plateData(1:i-1,:);[i plateData(i-1,2)+1 plateData(i-1,3) plateData(i-1,4)];[plateData(i:end,1)+1 plateData(i:end,2) plateData(i:end,3) plateData(i:end,4)]];
        end
    else
        plateData = [plateData(1:i-1,:);[i plateData(i-1,2)+1 plateData(i-1,3) plateData(i-1,4)]];
    end
    if(length(roomData) > i)
        if(roomData(i,2) ~= startTime+(i-1))
            roomData = [roomData(1:i-1,:);[i roomData(i-1,2)+1 roomData(i-1,3)];[roomData(i:end,1)+1 roomData(i:end,2) roomData(i:end,3)]];
        end
    else
        roomData = [roomData(1:i-1,:);[i roomData(i-1,2)+1 roomData(i-1,3)]];
    end
end

%% Plot raw data summary (Single Unit)
epochTime = coolerData(:,2);
epochTime = ((epochTime - 25200) / 86400) + 25569;
dateVecs = datevec(epochTime);
adjDateVecs = [dateVecs(:,1)+1900,dateVecs(:,2),dateVecs(:,3)-1,dateVecs(:,4)-1,dateVecs(:,5),dateVecs(:,6)];
epochTime = datenum(adjDateVecs);
[AX,GY1,GY2] = plotyy(epochTime,[coolerData(:,3) plateData(:,3) roomData(:,3)],epochTime,coolerData(:,5));
set(AX(1),'XLim',[epochTime(1) epochTime(length(epochTime))]);
minBetweenTicks = 10;
% Link the limits of both axes, and the labels
linkprop(AX,{'Xlim','XTickLabel','Xtick'});
axis tight
set(AX(1),'XTick',[epochTime(1):minBetweenTicks*30*(epochTime(2)-epochTime(1)):epochTime(length(epochTime))]);
datetick(AX(1),'x',15,'keeplimits','keepticks');
rotateXLabels(AX(1),90);
set(AX(1),'YLim',[floor(min(roomData(:,3))-2) ceil(max(plateData(:,3))+2)]);
set(AX(1),'YTick',[(floor(min(roomData(:,3))-2)):0.5:(ceil(max(plateData(:,3)))+2)]);
set(AX(2),'YLim',[0 ceil(max(coolerData(:,5))+1)]);
set(AX(2),'YTick',[0:1:ceil(max(coolerData(:,5))+1)]);
set(get(AX(1),'Ylabel'),'string','Temperature [C]');
set(get(AX(2),'Ylabel'),'string','Number of Model Updates [#]','color','black');
set(get(AX(1),'Xlabel'),'string','Time [HH:MM]');
set(AX(2),'YColor','k');
box(AX(1),'off');
set(GY1,'LineStyle','-')
set(GY2,'LineStyle','-')
set(GY1,'LineWidth',1.1)
set(GY2,'LineWidth',1.1)
title('Cooler Temperature & Model Status vs. Time');
legend([GY1;GY2],'Cooler Temp','Plate Temp','Room Temp','Model Updates','Location','SouthEast','Orientation','horizontal');

% Add threshold/setpoint to plot
grid on;
zoom on;
hold on;
% Comment out the lines at 28 & 30 when processing PID data.
% They are only there for bang-bang processing.
%plot(get(gca,'xlim'), [28 28],'color','black','LineStyle','--','LineWidth',1.8);
plot(get(gca,'xlim'), [29 29],'color','black','LineWidth',1.8);
%plot(get(gca,'xlim'), [30 30],'color','black','LineStyle','--','LineWidth',1.8);
hold off

% Save png and fig for later
set(gcf, 'InvertHardCopy', 'off');
screen_size = get(0, 'ScreenSize');
set(gcf, 'Position', [0 0 screen_size(3) screen_size(4)]);
xlabh = get(gca,'XLabel');
set(xlabh,'Position',get(xlabh,'Position') + [0 0.04 0]);
saveas(gcf, 'fig1mu', 'png');
saveas(gcf, 'fig1mu', 'fig');

%% Calculate the Statistics on a per-cycle basis
onTimes = [];
offTimes = [];
j = 1;
k = 1;
for i=1:length(coolerData(:,1))-1
    if ((coolerData(i+1,4) - coolerData(i,4)) == 1)
        onTimes(j) = epochTime(i);
        j = j+1;
    elseif ((coolerData(i+1,4) - coolerData(i,4)) == -1)
        offTimes(k) = epochTime(i);
        k = k+1;
    end
end
if(onTimes(1) > offTimes(1))
    onTimes = [epochTime(1) onTimes];
else
    offTimes = [epochTime(1) offTimes];
end

onTimeDurations = [];
offTimeDurations = [];
z = 1;
for i=1:(length(onTimes)-1)
    if(onTimes(1) > offTimes(1))
        onTimeDurations(z) = 86400*(onTimes(i) - offTimes(i));
        offTimeDurations(z) = 86400*(offTimes(i+1) - onTimes(i));
    else
        onTimeDurations(z) = 86400*(offTimes(i) - onTimes(i));
        offTimeDurations(z) = 86400*(onTimes(i+1) - offTimes(i));
    end
    z = z+1;
end

%% Make the vectors the same length for plotting
count = min([length(onTimes) length(offTimes) length(onTimeDurations) length(offTimeDurations)]);
onTimes = onTimes(1:end-(length(onTimes)-count));
offTimes = offTimes(1:end-(length(offTimes)-count));
onTimeDurations = onTimeDurations(1:end-(length(onTimeDurations)-count));
offTimeDurations = offTimeDurations(1:end-(length(offTimeDurations)-count));
plotTempPlate = interp1(epochTime,plateData(:,3),onTimes);
plotTempRoom = interp1(epochTime,roomData(:,3),onTimes);

%% Plot the on/off stats for the unit
figure();
% For runs with a large disturbance use 10; otherwise use 120 for smoothing
% (120 for bang-bang, 2 for PID)
onTimeDurationsSmoothed = smooth(onTimeDurations,2);
offTimeDurationsSmoothed = smooth(offTimeDurations,2);
[BX,HY1,HY2] = plotyy(onTimes,[onTimeDurations(:) offTimeDurations(:)],onTimes,[plotTempPlate(:) plotTempRoom(:)]);
% Link the limits of both axes, and the labels
linkprop(BX,{'Xlim','XTickLabel','Xtick'});
axis tight
minBetweenTicks = 10;
set(BX(1),'XLim',[onTimes(1) onTimes(length(onTimes))]);
set(get(BX(1),'Xlabel'),'string','Time [HH:MM]');
set(BX(1),'XTick',[onTimes(1):minBetweenTicks*30*(epochTime(2)-epochTime(1)):onTimes(length(onTimes))]);
datetick(BX(1),'x',15,'keeplimits','keepticks');
rotateXLabels(BX(1),90);
set(BX(1),'YLim',[floor(min([offTimeDurations onTimeDurations])-10) ceil(max([offTimeDurations onTimeDurations])+10)]);
set(BX(1),'YTick',floor(min([offTimeDurations onTimeDurations])-10):5:ceil(max([offTimeDurations onTimeDurations])+10));
set(BX(2),'YLim',[floor(min(roomData(:,3))-1) ceil(max(plateData(:,3))+4)]);
set(BX(2),'YTick',[floor(min(roomData(:,3))-1):0.5:ceil(max(plateData(:,3))+4)]);
set(get(BX(1),'Ylabel'),'string','Duration [s]');
set(get(BX(2),'Ylabel'),'string','Temperature [C]');
box(BX(1),'off');
set(HY1,'marker','o')
set(HY1,'LineStyle','none')
set(HY1,'LineWidth',1.1)
set(HY2,'LineStyle','-')
set(HY2,'LineWidth',1.1)
hold on;
plot(onTimes,onTimeDurationsSmoothed(:),'b','LineWidth',1.1);
plot(onTimes,offTimeDurationsSmoothed(:),'Color',[0 .5 0],'LineWidth',1.1);
title('Cooler Per-Cycle Metrics (On/Off Duration) vs. Time');
legend([HY1;HY2],'On','Off','Plate Temp','Room Temp','Location','NorthEast','Orientation','horizontal');
grid on;
zoom on;
hold off;

% Save png and fig for later
set(gcf, 'InvertHardCopy', 'off');
screen_size = get(0, 'ScreenSize');
set(gcf, 'Position', [0 0 screen_size(3) screen_size(4)]);
xlabh = get(gca,'XLabel');
set(xlabh,'Position',get(xlabh,'Position') + [0 0.04 0]);
saveas(gcf, 'fig2mu', 'png');
saveas(gcf, 'fig2mu', 'fig');

%% Plot the on/off stats for the unit with model update information
figure();
% For runs with a large disturbance use 10; otherwise use 120 for smoothing
% (120 for bang-bang, 2 for PID)
onTimeDurationsSmoothed = smooth(onTimeDurations,2);
offTimeDurationsSmoothed = smooth(offTimeDurations,2);
plotModelUpdateData = interp1(epochTime,coolerData(:,5),onTimes);
[CX,IY1,IY2] = plotyy(onTimes,[onTimeDurations(:) offTimeDurations(:)],onTimes,[plotTempPlate(:) plotModelUpdateData(:)]);
% Link the limits of both axes, and the labels
linkprop(CX,{'Xlim','XTickLabel','Xtick'});
axis tight
minBetweenTicks = 10;
set(CX(1),'XLim',[onTimes(1) onTimes(length(onTimes))]);
set(get(CX(1),'Xlabel'),'string','Time [HH:MM]');
set(CX(1),'XTick',[onTimes(1):minBetweenTicks*30*(epochTime(2)-epochTime(1)):onTimes(length(onTimes))]);
datetick(CX(1),'x',15,'keeplimits','keepticks');
rotateXLabels(CX(1),90);
set(CX(1),'YLim',[floor(min([offTimeDurations onTimeDurations])-10) ceil(max([offTimeDurations onTimeDurations])+10)]);
set(CX(1),'YTick',floor(min([offTimeDurations onTimeDurations])-10):5:ceil(max([offTimeDurations onTimeDurations])+10));
set(CX(2),'YLim',[floor(min(coolerData(:,5))) ceil(max(coolerData(:,5))+1)]);
set(CX(2),'YTick',[floor(min(coolerData(:,5))):1:ceil(max(coolerData(:,5))+1)]);
set(get(CX(1),'Ylabel'),'string','Duration [s]');
set(get(CX(2),'Ylabel'),'string','Number of Model Updates [#]');
box(CX(1),'off');
set(IY1,'marker','o')
set(IY1,'LineStyle','none')
set(IY1,'LineWidth',1.1)
set(IY2,'LineStyle','-')
set(IY2,'LineWidth',1.1)
set(IY2,'marker','*')
hold on;
plot(onTimes,onTimeDurationsSmoothed(:),'b','LineWidth',1.1);
plot(onTimes,offTimeDurationsSmoothed(:),'Color',[0 .5 0],'LineWidth',1.1);
title('Cooler Per-Cycle Metrics (On/Off Duration) & Model Update Data vs. Time');
legend([IY1;IY2(2)],'On','Off','Model Updates','Location','SouthEast','Orientation','horizontal');
grid on;
zoom on;
hold off;

% Save png and fig for later
set(gcf, 'InvertHardCopy', 'off');
screen_size = get(0, 'ScreenSize');
set(gcf, 'Position', [0 0 screen_size(3) screen_size(4)]);
xlabh = get(gca,'XLabel');
set(xlabh,'Position',get(xlabh,'Position') + [0 0.04 0]);
saveas(gcf, 'fig3mu', 'png');
saveas(gcf, 'fig3mu', 'fig');

%% Calculate Figure-of-Merit: Time spent above/below Thresholds for
% PID!
Tlower = 28.8;
Tupper = 29.2;
startTime = [];
stopTime = [];
StartFound = 0;
totalOutTime = 0;
startSample = 104; % Start past the initial cool down to avoid false large readings
for g=startSample:length(epochTime)
    if (((coolerData(g,3) < Tlower) || (coolerData(g,3) > Tupper)) && ~StartFound)
        startTime = coolerData(g,2);
        StartFound = 1;
    elseif (((coolerData(g,3) < Tlower) || (coolerData(g,3) > Tupper)) && StartFound)
        totalOutTime = totalOutTime + (coolerData(g,2) - coolerData(g-1,2));
    else
        if StartFound
            totalOutTime = totalOutTime + (coolerData(g,2) - coolerData(g-1,2));
        end
        startTime = [];
        StartFound = 0;
    end
end
totalOutTime                                           % in seconds
max(coolerData(startSample:end,3))                     % in C
min(coolerData(startSample:end,3))                     % in C
coolerData(end,2)-coolerData(1,2)                      % in s
mean(coolerData(startSample:end,3))                    % in C
(Tlower+Tupper)/2-mean(coolerData(startSample:end,3))  % in C

Appendix Q: modelGenerator.m

%% modelGenerator.m
% The purpose of this script is to use data from the previously executed
% test, held in the Matlab variable workspace, to create a model output
% file that can be used by the test fixture (BeagleBone boards). To
% accomplish this, the Matlab epoch times must be converted back to the
% unix/perl based epoch time format without corrupting the absolute time
% hack. Of note, the model is constructed of smoothed or best-fit curves
% through the on/off cycle time scatter data from the previous run. Also,
% the array for the model is built without the first 5 elements of the
% on/off cycle data. This data is purposely excluded as it represents a
% period of settling time for each test that is very specific to each
% test's starting conditions.

% Convert from Matlab to Perl Epoch
tempVecs = datevec(onTimes(6:end));
adjTempVecs = [tempVecs(:,1)-1900,tempVecs(:,2),tempVecs(:,3)+1,tempVecs(:,4)+1,tempVecs(:,5),tempVecs(:,6)];
adjTimesBack = datenum(adjTempVecs);
adjTimesBack = (adjTimesBack-25569)*86400+25200;
format long g;

% Write output file that will serve as the initial model for the model-based run
outMatrix = [(1:length(adjTimesBack))' adjTimesBack round(onTimeDurationsSmoothed(:)) round(offTimeDurationsSmoothed(:))]
dlmwrite('output.txt',round(outMatrix),'delimiter','\t','precision','%16.f');

Appendix R: Sample DTI Calculation

This sample DTI calculation uses data from Case 1a for the ACMBT bang-bang control data transfer scenario presented in Section 5.3.1.2. This derivation serves as an example of the process used to derive all other DTI figures using the equation presented in Chapter 5. Because the ACMBT test is 600 seconds longer than the data collection test, transmission is calculated over the first 6600 seconds (7200 s - 600 s) of the data collection period. This is accomplished using Equation 5.7 and Figure 5.8.
From Figure 5.8, 34 cycles are observed in the first 6600 s of the data collection period. Using Equation 5.7, a total of 6668 bytes transmitted over the data link is calculated for the traditional data transmission method. Though Equation 5.9 approximates the model size at ~150 bytes for a 2-hour test, closer inspection reveals that the model for this particular run was 124 bytes. Figure 5.10 indicates that 11 updates were required. Using Equation 5.9, an ACMBT total transmission of 157 bytes is found. Finally, from Equation 5.6, a DTI value of 42.5x is determined.
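For readers who wish to retrace the arithmetic, a minimal MATLAB sketch follows, using only the values quoted above. The 3-byte per-update cost is an inferred assumption, back-calculated from the quoted totals (157 = 124 + 11 x 3); the authoritative forms of Equations 5.6, 5.7, and 5.9 are given in Chapter 5.

% Sample DTI arithmetic for Case 1a (values quoted above)
bytesTraditional = 6668;  % traditional link total (Equation 5.7, 34 cycles in 6600 s)
bytesModel = 124;         % measured size of this run's model file
numUpdates = 11;          % model update count read from Figure 5.10
bytesPerUpdate = 3;       % assumed: back-calculated as (157 - 124)/11
bytesACMBT = bytesModel + numUpdates*bytesPerUpdate  % 157 bytes (Equation 5.9 form)
DTI = bytesTraditional/bytesACMBT                    % ~42.5x (Equation 5.6)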
Abstract
Communication requirements and demands on deployed systems are increasing daily. This increase is due to the desire for more capability, but also due to the changing landscape of threats on remote vehicles. As such, it is important that we continue to find new and innovative ways to transmit data to and from these remote systems, consistent with this changing landscape. Specifically, this research shows that data can be transmitted to a remote system effectively and efficiently with a model-based approach using real-time updates, called the Algorithmically Corrected Model-based Technique (ACMBT), resulting in substantial savings in communications overhead.

To demonstrate this model-based data transmission technique, a hardware-based test fixture was designed and built. Execution and analysis software was created to perform a series of characterizations demonstrating the effectiveness of the new transmission method. The new approach was compared to a traditional transmission approach in the same environment, and the results were analyzed and presented.

A Figure of Merit (FOM) was devised and presented to allow standardized comparison of traditional and proposed data transmission methodologies alongside bandwidth utilization metrics. The results of this research have successfully shown the model-based technique to be feasible. Additionally, this research has opened the trade space for future discussion and implementation of this technique.