A 1.2 V Micropower CMOS Active Pixel Sensor

Copyright 2001 by Kwang-Bo Cho

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (Electrical Engineering)

May 2001

UMI Microform Number: 3027704

This dissertation, written by Kwang-Bo Cho under the direction of his Dissertation Committee and approved by all its members, has been presented to and accepted by The Graduate School in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY. Date: May 11, 2001.

To my parents, my wife, and my daughter.

Acknowledgments

Many people deserve my thanks and appreciation for their contribution to the successful completion of my graduate school career. First of all, I would like to express my great gratitude to Professor John Choma, Jr., my research advisor and chairman of my dissertation committee, and to Dr. Eric R. Fossum, my research co-advisor, for their support, leadership, guidance and vision throughout this work. I would like to thank Professor Martin A. Gundersen and Professor Armand R. Tanguay, Jr. for being on my dissertation committee. I would like to acknowledge many useful discussions with Dr. Alexander Krymski, who was my supervisor at Photobit Corp. and created the environment in which my research thrived.

I would like to thank all those who helped me along the way, particularly: Dr. Konstatin Postnikov and Alexey Yakovlev, for helping me with the board design and the user interface software; the faculty and staff at USC, especially Mona Gordon, for their assistance over the years; and my coworkers at Photobit, especially Anders Andersson, Claudine Antonino, Suat Ay, Dr. Sandor Barna, Dr. Daniel Van Blerkom, Dr. Scott Campbell, Steve Huang, Dr. Michael Kaplinsky, Angela McNamee, Roger Panicacci, Richard Tsai, Robert Vo, and Michelle Wang, for their continuous help.

Finally, my family was always there for me with unfailing emotional, spiritual, and material support. I would like to express my sincere gratitude to my wife and parents. I cannot express with words my deep gratitude to them. I dedicate this work to them for their love.

Table of Contents

Table of Contents
List of Figures
List of Tables
Abstract

Chapter 1  Introduction
  1.1 Motivation and Goals
  1.2 Applications
  1.3 Thesis Organization
  1.4 Acknowledgment

Chapter 2  CMOS Image Sensors
  2.1 Brief Historical Background on Image Sensors
  2.2 Comparison of CMOS APS to CCD Technology
  2.3 Trends for CMOS Image Sensors
  2.4 Image Sensor Applications
  2.5 CMOS Image Sensor Architecture
  2.6 Pixel Circuits
    2.6.1 Passive Pixel Approach
    2.6.2 Active Pixel Approach
  2.7 On-chip Analog Signal Processing
  2.8 On-chip Analog-to-Digital Converter
  2.9 On-chip Color Processing
  2.10 Summary

Chapter 3  Low-Power Design Methodology for CMOS Image Sensors
  3.1 Power Measure
  3.2 Low-Power Design Methodology
    3.2.1 Power Reduction through Process Technology
    3.2.2 Power Reduction through Circuit/Logic Design
    3.2.3 Power Reduction through Architectural Design
    3.2.4 Power Reduction through Algorithm Selection
    3.2.5 Power Reduction through System Integration
  3.3 Battery-Operated Sensor Considerations
  3.4 Summary

Chapter 4  A 1.2 V Micropower CMOS Active Pixel Image Sensor
  4.1 Sensor Chip Architecture
  4.2 Analog Building Blocks
    4.2.1 Pixel
    4.2.2 Analog Signal Chain
    4.2.3 Global Amplifier
    4.2.4 Self-Calibration Successive Approximation ADC
    4.2.5 Reference Circuit
    4.2.6 Bias Circuit
    4.2.7 Power-On-Reset
    4.2.8 Bootstrapping Switch
  4.3 Digital Building Blocks
    4.3.1 Row and Column Select Logic
    4.3.2 Row Driver
    4.3.3 On-Chip Clock Generator
    4.3.4 Timing & Control Block
    4.3.5 Data Coding Logic
  4.4 Summary

Chapter 5  Experimental Results
  5.1 First-Generation Image Sensor
    5.1.1 First-Generation Sensor Micrograph
    5.1.2 First-Generation Sensor Characterization
    5.1.3 Test Images
  5.2 Second-Generation Image Sensor
    5.2.1 Second-Generation Sensor Micrograph
    5.2.2 Second-Generation Sensor Characterization
    5.2.3 On-Chip Clock Generator
    5.2.4 Test Images
  5.3 Comparison
  5.4 Scalability
  5.5 Summary

Chapter 6  Conclusion

References

Appendix
  A.1 System Integration
    A.1.1 Data Communication
    A.1.2 Phase-Locked Loop (PLL)
    A.1.3 Optic Considerations
    A.1.4 User Interface
  A.2 Pixel Characterization
    A.2.1 Conversion Gain
    A.2.2 Dark Current
    A.2.3 Quantum Efficiency
  A.3 Noise Considerations
    A.3.1 Noise Sources of CMOS APS
    A.3.2 SNR and Dynamic Range
    A.3.3 Power Supply Noise Margin
  A.4 Alternative Power Sources

List of Figures

Figure 1.1: Basic function flow of a digital camera.
Figure 1.2: Power consumption level of image sensors.
Figure 2.1: The steadily increasing ratio between pixel size and minimum feature size permits the use of CMOS circuitry within each pixel.
Figure 2.2: Frame rate and resolution for image sensor applications.
Figure 2.3: Generic CMOS image sensor architecture.
Figure 2.4: Photon to bits.
Figure 2.5: Passive pixel schematic and potential well. When the transfer gate TX is pulsed, photogenerated charge integrated on the photodiode is shared on the bus capacitance (After Fossum, [21]).
Figure 2.6: A photodiode-type APS. The voltage on the photodiode is buffered by a source follower to the column bus, selected by RS (row select). The photodiode is reset by transistor RST (From Fossum, [21]).
Figure 2.7: Photogate-type APS pixel schematic and potential wells. Transfer of charge and correlated double sampling permits low-noise operation (From Fossum, [21]).
Figure 2.8: Color interpolation.
Figure 3.1: Low-voltage CMOS switch.
Figure 3.2: Comparison of previously reported image sensors in terms of the new power figure of merit (nJoules/pixel). (Note: semi-logarithmic scale.)
Figure 3.3: Design process.
Figure 3.4: Frequency divider. (a) Divide-by-2 element. (b) Divide-by-2N circuit.
Figure 3.5: Low-power technique by reducing the internal bus swing.
Figure 3.6: ADC architectures. (a) Pixel-parallel output. (b) Serial output. (c) Column-parallel multiplexed output. (d) Column-parallel parallel output.
Figure 3.7: Power estimation for low- and high-speed ADCs.
Figure 3.8: Low-power design steps.
Figure 3.9: Typical battery load characteristic.
Figure 4.1: Sensor block diagram.
Figure 4.2: Pixel-to-ADC signal path.
Figure 4.3: Relative row and column timing.
Figure 4.4: CMOS active pixel sensor layout and schematic.
Figure 4.5: Analog signal chain.
Figure 4.6: Operational transconductance amplifier.
Figure 4.7: Frequency response of the OTA.
Figure 4.8: Block diagram of the successive approximation ADC.
Figure 4.9: The successive approximation ADC using a binary-weighted capacitor array DAC.
Figure 4.10: The successive approximation ADC using an additional capacitor.
Figure 4.11: Low-power 8-bit successive approximation ADC.
Figure 4.12: Conventional bandgap reference circuit.
Figure 4.13: Low-voltage bandgap reference circuit.
Figure 4.14: Bias circuits for (a) Vln and (b) Vref.
Figure 4.15: Power-on-reset circuit.
Figure 4.16: Bootstrapping circuit.
Figure 4.17: Dynamic shift register.
Figure 4.18: Row driver with bootstrapping switch.
Figure 4.19: On-chip clock generator.
Figure 4.20: Data format.
Figure 4.21: Low-power design steps in this research.
Figure 5.1: Sensor core micrograph of the first-generation image sensor.
Figure 5.2: Measured sensor core power consumption at 20 fps from 1.1 to 1.7 V power supply.
Figure 5.3: Test images at different power supply voltages.
Figure 5.4: Second-generation image sensor.
Figure 5.5: Measured power consumption at 30 fps from 1.2 to 1.7 V with 25.2 MHz on-chip clock.
Figure 5.6: Measured power consumption at 20 fps from 1.2 to 3.3 V with the external 16.5 MHz clock.
Figure 5.7: Measured power consumption at 1.5 V power supply for different frame rates.
Figure 5.8: Measured power consumption at 2.7 V power supply for different frame rates.
Figure 5.9: Measured frequency response of on-chip clock generator.
Figure 5.10: Test images with on-chip clock at 30 fps.
Figure 5.11: Test images with 1.5 V power supply. (a) 20 fps. (b) 40 fps.
Figure 5.12: Test images with 2.7 V power supply. (a) 20 fps. (b) 40 fps.
Figure 5.13: Comparison of the first- and the second-generation sensors with previously reported image sensors in terms of the new power figure of merit (nJoules/pixel). (Note: semi-logarithmic scale.)
Figure A.1: Data communication.
Figure A.2: Phase-locked loop (PLL).
Figure A.3: The physical layout of the PLL.
Figure A.4: Optics calculator for the 176 x 144, 5 µm pixel pitch micropower sensor.
Figure A.5: PC user interface.
Figure A.6: Temporal noise and fixed pattern noise vs. signal.
Figure A.7: Signal to temporal noise ratio vs. signal.
Figure A.8: Power supply noise margin.
List of Tables

Table 2.1: History of image sensors.
Table 2.2: Major analog-to-digital conversion techniques in a CMOS image sensor.
Table 3.1: Sensor output type and specification.
Table 3.2: Scaling laws of MOS devices.
Table 3.3: CMOS process and power supply trends in CMOS image sensors.
Table 3.4: 3-bit binary code and gray code.
Table 5.1: Specification and measured sensor performance at 1.2 V and 6.5 fps.
Table 5.2: Estimated chip core power portfolio at 1.2 V power supply and 20 fps.
Table 5.3: Specification and measured sensor performance at 1.5 V and 5 fps.
Table 5.4: Estimated chip power portfolio with 30 fps at 1.5 V power supply.
Table 5.5: Comparison of the first- and the second-generation sensors.
Table 5.6: Scalability for different image formats at 1.5 V power supply.
Table A.1: Examples of ambient energy sources.

Abstract

A 1.2 V Micropower CMOS Active Pixel Sensor
Kwang-Bo Cho (Dissertation Committee Chairperson: John Choma, Jr.)

The problem addressed in this dissertation is the development of a micropower CMOS active pixel sensor that dissipates two orders of magnitude less power than current state-of-the-art CMOS image sensors and occupies only a few square millimeters in area. The resulting micropower camera-on-a-chip would require so little power that it could be run on a watch battery. In order to achieve the design goals, a low-power, low-voltage design methodology is developed and applied throughout the design process, from system level to process level, while realizing the performance needed to satisfy the design specification. As the first-generation low-power sensor, a micropower 176 x 144 CMOS APS with an on-chip 8-bit analog-to-digital converter (ADC) that operates at 20 frames per second (fps) from a 1.2 V power supply is implemented. The sensor core, which includes the pixel array, row/column logic, analog readout, ADC, and biases, dissipates only 48 µW at 20 fps. Even with 1.2 to 3.3 V level-shifting I/O pads, overall dissipation remains below 1 mW. The sensor is implemented in a 0.35 µm 2P3M CMOS technology. As the second generation, a self-clocked image sensor is implemented that can be operated with only 3 pads (GND, VDD (1.2-1.7 V), DATAOUT). The measured power consumption of the overall chip with the internal 25.2 MHz on-chip clock (30 fps) at a 1.5 V power supply is about 550 µW. We believe that this chip is the world's lowest power image sensor and the first image sensor designed for watch battery operation.

Chapter 1  Introduction

Up until recently, the dominant image sensor technology that converts photons to electrons for display or storage purposes was the charge-coupled device (CCD). Over the past five years, there has been a growing interest in complementary metal-oxide-semiconductor (CMOS) image sensors.
The major reason for this interest is customer demand for portable, low-power, miniaturized, cost-effective imaging systems. Because CMOS process technology is used for all modern microprocessors, memory chips, and application-specific integrated circuits (ASICs), CMOS image sensors offer the potential opportunity to integrate a significant amount of very large scale integration (VLSI) electronics on-chip and to reduce component and packaging costs [21].

Today there are many kinds of imaging systems using image sensors with very different characteristics and applications. Despite the wide variety of applications, all digital cameras have the same basic function flow, as shown in Figure 1.1: (1) optical collection of photons, i.e., a lens; (2) wavelength discrimination of photons, i.e., color filters; (3) a detector for conversion of photons to electrons, e.g., a photodiode pixel; (4) analog signal processing to read out the detectors, e.g., a sample and hold; (5) analog-to-digital conversion, e.g., an analog-to-digital converter (ADC); (6) digital signal processing electronics for color processing, etc.; and (7) format conversion and interface electronics.

[Figure 1.1: Basic function flow of a digital camera. (Optics (lens) → color filter (CFA) → pixel → analog signal processing → analog-to-digital conversion → digital signal processing → format conversion and interface.)]

CMOS image sensors that use the mainstream microelectronics CMOS fabrication process realize the electronic "film" of a digital camera. The trends for these image sensors are toward higher speed, lower power, lower cost, higher resolution, and more functionality. For instance, a 500 frames per second 1024 x 1024 8-bit CMOS active pixel sensor with 450 mW power consumption, designed by A. Krymski et al., was reported in 1999 [35].

1.1 Motivation and Goals

Low-voltage operation and low-power dissipation are becoming key device-, circuit-, and system-level drivers in the microelectronics industry, including the image sensor industry. Scaled CMOS technology is suited as the engine for both low-voltage image sensor systems and the forthcoming low-power image sensor revolution. Reduction of the power supply voltage is a key element in low-power CMOS image sensors. Even with the reduction of the voltage, however, active power consumption will grow due to the high rate of density and performance improvements. The success and utilization of the low-voltage, low-power CMOS image sensor will depend on the ability to match digital and analog functionality, logic, and system-level integration in a cost-effective way. Therefore, new design techniques and power management aimed at reduction of active and standby power are needed in the CMOS image sensor.

[Figure 1.2: Power consumption level of image sensors. (Power consumption of CCDs versus CMOS sensors over time: 1980's, 1990's, 2000 and beyond.)]
In particular, portable multimedia electronic systems that use image sensors, which are expected to dominate future electronics markets, use batteries as their energy sources and therefore demand low-voltage, low-power operation. Battery lifetime is an essential competitive factor, and the power consumption level for such image sensors will need to be from a few mW down to a few hundred µW [12]. However, the power consumption level of current state-of-the-art CMOS image sensors is from a few hundred mW down to a few tens of mW, as shown in Figure 1.2. So the demand for a new low-power, battery-operated image sensor that can be used in the next generation of portable electronics, such as cellular phones, personal digital assistants (PDAs), wireless security systems, and toys, is increasing.

The problem to be addressed in this work is the development of a battery-operated miniature CMOS image sensor with a "camera-on-a-chip" architecture. This sensor dissipates one to two orders of magnitude less power than current state-of-the-art CMOS image sensors. Yet this image sensor features all timing and control on the chip and provides digital video output. Our objective, then, is to develop a micropower CMOS active pixel sensor that occupies a few square millimeters in area and dissipates two orders of magnitude less power than current state-of-the-art CMOS image sensors, to be powered from a watch battery. Our specific goals are to develop: first, an image sensor operating from a 1.5 V power supply and possibly over a wider (1.0-3.6 V) voltage range; second, a sensor that dissipates less than 1 mW, one to two orders of magnitude less power than current state-of-the-art CMOS image sensors; third, a "camera-on-a-chip" architecture that occupies a few square millimeters in area and provides 8-bit digital video output; fourth, a self-clocked image sensor that can be operated with 3 pads (VDD, GND, DATAOUT); and fifth, a low-power design methodology that can be used in CMOS image sensors.

1.2 Applications

The resulting low-voltage, low-power CMOS image sensor could be powered by a watch battery for a period of a few weeks, even months. We expect that this sensor will find wide potential commercial applications such as cellular phones, personal digital assistants (PDAs), wireless security systems, and toys. These include "embedded" applications, in which a camera system is made an integral part of another system (i.e., products that traditionally don't have a camera). Toy manufacturers, for example, can take advantage of a micropower sensor's ultra-low power and small form factor to add vision to electronic trains, race cars, planes, and helicopters. Access-control companies that use fingerprint or facial recognition systems for identification can extend their markets to include remote or wireless products, such as smart ID cards. The company also foresees smart pens with built-in optical character recognition (OCR) and dictionary; home appliances that can be remotely viewed or controlled via the Internet; and tactic sensors used in a distributed vision system for interactive gaming. This chip is staking its success on CMOS image sensors replacing traditional charge-coupled devices (CCDs), which draw more power than CMOS devices, limiting their application in smaller and lighter electronic products.
Wireless imaging markets are demanding lower power consumption and ultra-compactness. In addition to commercial applications, this sensor can also be useful for military and NASA applications. For instance, a dozen of these miniature low-power sensors could be placed inside a spacecraft for environmental control, mounted into a spacesuit or a helmet, or used in future unmanned micro-size vehicles.

1.3 Thesis Organization

This chapter introduced the research motivation and goals. The remaining chapters discuss the primary contributions of this work: 1) the first 1.2 V CMOS active pixel image sensor, with 48 µW chip-core power consumption, is developed; 2) the first self-clocked image sensor operated with 3 pads (VDD, GND, DATAOUT) is demonstrated and shows more than a 10 times improvement in power consumption compared to state-of-the-art CMOS image sensors; 3) we believe that this chip is the world's lowest power image sensor and the first image sensor designed for watch battery operation; and 4) low-voltage, low-power design techniques have been successfully proven, and the possibility of wide voltage-range operation (1.0-3.6 V) has been shown.

Chapter 2 thoroughly describes the general CMOS active pixel sensor (APS). In that chapter, CMOS image sensors are overviewed. First, a historical background on image sensors, including CCDs and CMOS image sensors, is reviewed; we also compare CMOS APS to CCD technology and show technical trends for CMOS APS. Second, image sensor applications are described. Third, a generic CMOS image sensor architecture and its functional blocks are discussed.

Chapter 3 discusses a low-power design methodology for CMOS image sensors. In that chapter, first, a new figure of merit for power consumption in CMOS image sensors is proposed. Second, the low-power design methodology for CMOS image sensors, from process level to system level, is discussed. Third, design considerations specific to battery-operated image sensors are addressed.

Chapter 4 proposes a 1.2 V micropower CMOS active pixel image sensor. This chapter presents an image sensor that is designed for 1.2 V operation and dissipates one to two orders of magnitude less power than current state-of-the-art CMOS image sensors. The image sensor architecture and the detailed analog and digital building blocks in this micropower image sensor are described.

Chapter 5 reports experimental results. This chapter describes measurement results for the first- and second-generation micropower CMOS image sensors. First, the first-generation micropower CMOS image sensor is characterized and test images are shown. Second, the second-generation micropower CMOS image sensor is characterized; the measurement results of an on-chip clock generator are also discussed. Third, we compare the first- and second-generation sensors, and show a comparison of both sensors with previously reported image sensors in terms of the new power figure of merit described in Chapter 3. Fourth, the scalability for different image formats is discussed.

Chapter 6 summarizes the major accomplishments achieved in this research and presents ideas for future research.

The Appendix describes, first, the overall system integration, including data communication, the phase-locked loop, optics considerations, and the user interface.
Second, pixel characterization and noise sources in CMOS image sensors are discussed. Finally, alternative power supply sources are explored.

1.4 Acknowledgment

The research was supported under a Small Business Innovation Research (SBIR) program contract for the U.S. Defense Advanced Research Projects Agency.

Chapter 2  CMOS Image Sensors

In the era of multimedia, highly integrated image sensor systems play a crucial role in facilitating information flow, consumer electronics, and communication. A variety of multimedia applications require advanced image sensor techniques that make it possible to handle high-resolution pixel arrays (a huge quantity of data), to achieve high frame rates, and to operate from batteries for long periods of time. In order to meet these strict requirements for multimedia applications, there are major trends in image sensor technologies. The trends for image sensors are toward higher speed, lower power, lower cost, higher resolution, and more functionality. In this chapter, these image sensors are overviewed. First, a historical background on image sensors, including the charge-coupled device (CCD) and the CMOS image sensor, is reviewed; we also compare the CMOS active pixel sensor (APS) to CCD technology and show technical trends for the CMOS APS. Second, image sensor applications are described. Third, a generic CMOS image sensor architecture and its functional blocks are discussed.

2.1 Brief Historical Background on Image Sensors

An overview of image sensor history provides a useful opportunity to understand both the past and present background of this exciting industry. This section is based on the plenary paper written by E. R. Fossum [21].

Before the CMOS APS and before CCDs, there were MOS image sensors. In the 1960's there were numerous groups working on solid-state image sensors, with varying degrees of success, using NMOS, PMOS and bipolar processes. In 1967, Weckler at Fairchild suggested operating p-n junctions in a photon flux integrating mode [68]. The photocurrent from the junction is integrated on a reverse-biased p-n junction capacitance. Readout of the integrated charge using a PMOS switch was suggested. The signal charge, appearing as a current pulse, could be converted to a voltage pulse using a series resistor. A 100 x 100 element array of photodiodes was reported in 1968 [18]. Also in 1968, in a seminal paper from the UK, Noble described several configurations of self-scanned silicon image detector arrays [51]. Both surface photodiodes and buried photodiodes (to reduce dark current) were described. Noble also discussed a charge integration amplifier for readout, similar to that used later by others. In addition, the first use of a MOS source-follower transistor in the pixel for readout buffering was reported. An improved model and description of the operation of the sensor was reported by Chamberlain in 1969 [9]. The issue of fixed-pattern noise (FPN), which is a fixed output signal variation among pixels in an image sensor under the same operating conditions and uniform illumination, was explored in a 1970 paper by Fry, Noble, and Rycroft [25]. Until recently, FPN has been considered the primary problem with MOS and CMOS image sensors.
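To make the photon flux integrating mode described above concrete, consider a minimal worked example; the photocurrent, junction capacitance, and integration time used here are illustrative assumptions rather than values from Weckler's device:

\[
\Delta V = \frac{I_{ph}\,t_{int}}{C_j} \approx \frac{(0.1\ \mathrm{pA})(30\ \mathrm{ms})}{10\ \mathrm{fF}} = 0.3\ \mathrm{V}.
\]

A sub-picoampere photocurrent, integrated over a video frame time on roughly ten femtofarads of junction capacitance, thus produces a signal of a few hundred millivolts; this integrate-then-read principle underlies all of the pixel circuits discussed later in this chapter.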
In 1970, when the CCD was first reported [7], its relative freedom from FPN was one of the major reasons for its adoption over the many other forms of solid-state image sensors. The smaller pixel size afforded by the simplicity of the CCD pixel also contributed to its embrace by industry. Since the CCD's inception, the main focus of research and development has been CCD sensor performance. The camcorder market has driven impressive improvements in CCD technology. Criteria include quantum efficiency, optical fill factor (the fraction of the pixel used for detection), dark current, charge transfer efficiency, smear, readout rate, lag, readout noise and full well, i.e., dynamic range. A desire to reduce cost and optics mass has driven a steady reduction in pixel size. HDTV and scientific applications have driven an increase in array size. Recently, emphasis has been placed on improved CCD functionality, such as electronic shutter, low power and simplified supply voltages. There have been several reports of integrating CMOS with CCDs to increase CCD functionality [2,13,42], but with the exception of some line arrays, the effort has not been fruitful due to both cost and the difficulty of driving the large capacitive loads of the CCD.

While a large effort was applied to the development of the CCD in the 1970's and 1980's, MOS image sensors were only sporadically investigated and compared unfavorably to CCDs with respect to the above performance criteria [53]. In the late 1970's and early 1980's, Hitachi and Matsushita continued the development of MOS image sensors [54,60] for camcorder-type applications, including single-chip color imagers [4]. Temporal noise in MOS sensors started to lag behind the noise achieved in CCDs, and by 1985 Hitachi combined the MOS sensor with a CCD horizontal shift register [3]. In 1987, Hitachi introduced a simple on-chip technique to achieve variable exposure times and flicker suppression under indoor lighting [33]. However, perhaps due to residual temporal noise, especially important in low-light conditions, Hitachi abandoned its MOS approach to sensors.

It is interesting to note that in the late 1980's, while CCDs predominated in visible imaging, two related fields started to turn away from the use of CCDs. The first was hybrid infrared focal-plane arrays, which initially used CCDs as a readout multiplexer. Due to limitations of CCDs, particularly in low-temperature operation and charge handling, CMOS readout multiplexers were developed that allowed both increased functionality and improved performance compared to CCD multiplexers [28]. A second field was high-energy physics particle/photon vertex detectors. Many workers in this area also initially used CCDs for detection and readout of charge generated by particles and photons. However, the radiation sensitivity of CCDs and the increased functionality offered by CMOS [16] have led to the subsequent abandonment of CCD technology for this application.

In the early 1990's, though, two independently motivated efforts led to a resurgence in CMOS image sensor development. The first effort was to create highly functional single-chip imaging systems where low cost, not performance, was the driving factor. This effort was spearheaded by separate researchers at the University of Edinburgh in Scotland (later becoming VVL) and Linkoping University in Sweden (later becoming IVP).
The second independent effort grew from NASA's need for highly miniaturized, low-power, instrument imaging systems for next-generation deep space exploration spacecraft. Such imaging systems are driven by performance, not cost. This latter effort was led by the U.S. Jet Propulsion Laboratory (JPL), with subsequent transfer of the technology to AT&T Bell Labs, Kodak, National Semiconductor and several other major US companies, and the startup of Photobit. The convergence of these efforts has led to significant advances in CMOS image sensors and the development of the CMOS APS. It has performance competitive with CCDs with respect to read noise, dynamic range and responsivity, but with vastly increased functionality, substantially lower system power (10-50 mW), and the potential for lower system cost. The first high-performance 128 x 128 pixel array APS was demonstrated by JPL in 1993 [43]. Arrays as large as 1024 x 1024, with a 10 µm pixel pitch in a 0.5 µm process, have also been developed by a JPL/AT&T collaboration [21]. Another noteworthy development is that A. Krymski et al. from Photobit Corp. introduced a 500 frames per second (fps) 1024 x 1024 8-bit CMOS image sensor with 450 mW power consumption in 1999 [35]. In 2000, K.-B. Cho et al. from Photobit Corp. reported a 1.2 V micropower active pixel image sensor [12]. Photobit has also devised a 500 frames per second 1.3-megapixel resolution CMOS image sensor with a freeze-frame electronic shutter, a development aimed at closing the gap between CCDs and CMOS sensors in advanced imaging applications. Foveon announced a CMOS image sensor with a resolution of 4,096 x 4,096, which is about twice the resolution of 35 mm film by some measures. In 2001, S. Kleinfelder et al. reported a 10,000 fps 352 x 288 CMOS digital pixel sensor [34]. Table 2.1 summarizes recognizable events in the history of image sensors.

Table 2.1: History of image sensors.

  Year     Event
  1960's   Early work on MOS imaging devices (did not work well given the state of the art of MOS)
  1967     G. P. Weckler at Fairchild proposed a reverse-biased p-n junction MOS imager
  1968     R. Dyck et al. reported a 100 x 100 photodiode array chip; P. Noble reported the first use of a source follower in the pixel for readout buffering
  1970's   CCDs work better than MOS devices
  1970     W. S. Boyle et al. reported the first CCD
  1980's   CCDs dominate; limited work on MOS/CCD imagers
  1982     M. Aoki et al. reported a MOS single-chip color imager
  1990's   CMOS APS is developed and shows exponential improvement
  1993     First high-performance 128 x 128 array size CMOS APS demonstrated (JPL)
  1995     CMOS APS as large as 1024 x 1024 demonstrated (JPL/AT&T)
  1999     Photobit reported a 500 fps 1024 x 1024 digital image sensor with 450 mW
  2000's   CMOS APS provides high-speed, low-power, high-performance, low-cost imaging systems
  2000     Photobit reported a 1.2 V micropower active pixel image sensor; Photobit demonstrated a 1.3-megapixel CMOS APS with a freeze-frame electronic shutter; Foveon announced a 4,096 x 4,096 CMOS image sensor
  2001     S. Kleinfelder et al. reported a 10,000 fps 352 x 288 CMOS digital pixel sensor

2.2 Comparison of CMOS APS to CCD Technology

Over the past five years, CMOS image sensors have received much attention in the electronics industry.
Compared to their CCD counterparts, CMOS image sensors consume less power, allow random access, and can be integrated with analog and digital functional blocks in a standard CMOS process. The cost for these advantages is a reduction in image quality, namely an increase in sources of noise such as dark current and fixed pattern noise. A large portion of the cost and effort of a CCD fabrication process is dedicated to minimizing the pixel dark current and improving the efficiency of the light-to-voltage conversion. In a standard CMOS process, however, the imager designer has little control over the fabrication steps. The challenge for a CMOS imager designer therefore becomes utilizing innovative circuit techniques to achieve CCD-quality images in a standard CMOS technology.

One of the exciting benefits of combining CMOS with the active pixel concept is that imaging system power is greatly reduced from traditional CCD-based imaging systems. System power is typically reduced ten-fold (10x) in functionally equivalent systems. This saving comes from several sources. First, CCDs are very capacitive devices, and operating them requires significant current levels in the horizontal and vertical driving circuits. The APS, in which only a single row needs to be addressed for readout, uses much less readout power. Second, the CMOS APS operates from a single 5 V supply (or 3.3 V), unlike CCDs, which require several different power supplies at higher voltages. Thus, the reduced voltage (power is current times voltage) and the fact that generating different voltages itself requires power mean lower power used by a CMOS APS-based camera. Third, the ADC architecture uses less power than a conventional off-the-shelf ADC or even a conventional ADC integrated on chip. Fourth, since a CCD has analog output at high frequency, the analog output must settle quickly to preserve dynamic range. This means the drive current for the analog output is quite high for a CCD. In contrast, the CMOS APS with on-chip ADC has digital output. Digital output is much more robust and needs to be far less accurate in voltage (since it is either 'on' or 'off'), so the output driving circuits draw much less current. In fact, the difference in output driving power more than compensates for the power used by the ADC itself, so that the CMOS APS ADC is nearly "free" with respect to power cost. Finally, the CMOS APS camera-on-a-chip can be readily placed in a low-power standby mode for portable applications to further enhance the average system power savings.

2.3 Trends for CMOS Image Sensors

Because a CMOS process is the standard semiconductor technology used to make nearly all modern integrated circuits, there is an enormous worldwide effort to continually improve the technology, in terms of reducing feature size, improving silicon quality, reducing wafer fabrication cost, and increasing wafer size. Thus, by using standard CMOS as a basis for the image sensor, one gets a "free ride" on this effort, resulting in a low-cost path for pixel size reduction and performance improvement. Contributing to the recent activity in CMOS image sensors is the steady, exponential improvement in CMOS technology.
[Figure 2.1: The steadily increasing ratio between pixel size and minimum feature size permits the use of CMOS circuitry within each pixel. (Semi-logarithmic plot of image sensor pixel size, the practical optical limit, and CMOS feature size, in µm, versus year from 1970 to 2005.)]

The rate of minimum pixel size decrease has followed similar improvements in CMOS technology, as shown in Figure 2.1. The sensor pixel size is already limited by both optical physics and optics cost for most applications. Recent progress in on-chip signal processing (and off-chip digital signal processing (DSP)) has also reduced CMOS image sensor FPN to acceptable levels. In addition, the transition from analog imaging and display systems to digital cameras tethered to personal computers permits digital FPN correction with negligible system impact.

2.4 Image Sensor Applications

[Figure 2.2: Frame rate and resolution for image sensor applications. (Frame rate in Hz versus resolution in pixels; application regions include video conference, digital still camera, machine vision, defense, astronomical observation, and physical phenomena.)]

Image sensor applications can be divided by frame rate (speed) and resolution (the number of pixels), as shown in Figure 2.2. Low-resolution, low-speed image sensors will be useful for wireless video communication and automotive applications. High-resolution, low-speed image sensors will enable high-resolution, low-power, miniature cameras at lower cost. These are very important for consumer camera markets (digital still cameras, camcorders, and high-end video conferencing), machine vision, and digital cinematography. The resolution required for a film movie camera replacement is minimally 2048 x 3072 pixels, going to 2048 x 4096 pixels. Space and physics applications such as astronomical observation, astrobiology, and the measurement of physical phenomena will also benefit from the sensor development by creating miniature high-resolution and low-power cameras. Military and defense applications will also benefit from high-speed and high-resolution image sensors.

2.5 CMOS Image Sensor Architecture

First of all, we need to define a general architecture of the CMOS image sensor, so that we can use it as our reference point for determining power consumption. Figure 2.3 shows a generic architecture of state-of-the-art CMOS image sensors. It consists of the pixel array, analog signal processing (ASP), row/column select logic, an analog-to-digital converter, a timing & control generator, biases, digital signal processing functions (color processing, coding, and compression), communication functions, and an interface. In CMOS image sensors, we look at the essential signal path from photons to bits, as shown in Figure 2.4. The lens collects photons, the pixel converts photons to electrons and then to volts, the volts are sent to analog signal processing for sampling and amplification, and the ADC converts volts to bits.

[Figure 2.3: Generic CMOS image sensor architecture. (Block diagram showing the master clock, row and column select logic, pixel array, analog signal processing, ADC, timing & control, biases, VDD/GND, and analog and digital output data.)]

[Figure 2.4: Photon to bits. (Photons are converted by the pixel into electrons and volts, amplified with a conversion gain in µV/e-, and digitized into bits.)]
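The photon-to-bits path of Figure 2.4 can be summarized by a simple signal-chain relation. The quantum efficiency, photon count, conversion gain, analog gain, and ADC reference used below are illustrative assumptions, not specifications of the sensors designed in this work:

\[
N_e = \eta\,N_{ph}, \qquad V_{pix} = N_e \cdot CG, \qquad
D_{out} = \left\lfloor \frac{A_v\,V_{pix}}{V_{ref}}\,(2^{8}-1) \right\rfloor .
\]

For example, with \(\eta = 0.3\), \(N_{ph} = 20{,}000\) photons, \(CG = 20\ \mu\mathrm{V/e^-}\), \(A_v = 4\), and \(V_{ref} = 1\ \mathrm{V}\), the pixel collects \(N_e = 6000\) electrons, produces \(V_{pix} = 120\ \mathrm{mV}\), and yields an 8-bit output code of about 122.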
At the analog signal processing stage, fixed-pattern noise is cancelled. Sampling is performed, and then a gain step, before the pixel data is sent to the analog-to-digital converters. The analog-to-digital conversion can be performed in-pixel, in a column-parallel fashion, or serially.

2.6 Pixel Circuits

Pixel circuits can be divided into passive pixels and active pixels. The active pixel sensor contains an active amplifier. There are three predominant approaches to pixel implementation in CMOS: the photodiode-type passive pixel, the photodiode-type active pixel, and the photogate-type active pixel. These are described below.

2.6.1 Passive Pixel Approach

The photodiode-type passive pixel approach remains virtually unchanged since it was first suggested by Weckler in 1967 [18,64]. The passive pixel concept is shown in Figure 2.5. It consists of a photodiode and a pass transistor. When the access transistor M1 is activated, the photodiode is connected to a vertical column bus. A charge integrating amplifier (CIA) readout circuit at the bottom of the column bus keeps the voltage on the column bus constant and reduces kTC noise [51]. When the photodiode is accessed, the voltage on the photodiode is reset to the column bus voltage, and the charge, proportional to the photosignal, is converted to a voltage by the CIA. The single-transistor photodiode passive pixel allows the highest design fill factor for a given pixel size, or the smallest pixel size for a given design fill factor, in a particular CMOS process. A second selection transistor has sometimes been added to permit true X-Y addressing. The quantum efficiency of the passive pixel (the ratio of collected electrons to incident photons) can be quite high due to the large fill factor and the absence of an overlying layer of polysilicon such as that found in many CCDs.

[Figure 2.5: Passive pixel schematic and potential well. When the transfer gate TX is pulsed, photogenerated charge integrated on the photodiode is shared on the bus capacitance (After Fossum, [21]).]

The major problems with the passive pixel are its readout noise level and scalability. Readout noise with a passive pixel is typically of the order of 250 electrons r.m.s., compared to commercial CCDs that achieve less than 20 electrons r.m.s. of read noise. The passive pixel also does not scale well to larger array sizes or faster pixel readout rates. This is because increased bus capacitance and faster readout speed both result in higher readout noise. To date, passive pixel sensors suffer from large fixed pattern noise from the column amplifiers, though this is not a fundamental problem.
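One way to see the bus-capacitance scaling noted above is through the kTC noise charge associated with the column bus; the capacitance values below are illustrative assumptions rather than measured values:

\[
Q_n = \frac{\sqrt{kTC_{bus}}}{q}
\quad\Rightarrow\quad
C_{bus} = 0.4\ \mathrm{pF}:\; Q_n \approx 250\ \mathrm{e^-\ r.m.s.},
\qquad
C_{bus} = 2\ \mathrm{pF}:\; Q_n \approx 570\ \mathrm{e^-\ r.m.s.}
\]

(evaluated at T = 300 K). The noise charge grows as the square root of the bus capacitance, which is consistent with the roughly 250 electron r.m.s. read noise quoted above and explains why larger arrays, with longer and more capacitive column buses, are penalized.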
This differential readout scheme can also be used to subtract the sensing pixel dark current from the dummy cell's dark current, thus reducing the effects of the dark current. The passive pixel consists of a high- efficiency n-well photodiode and a transistor for row select. The output of the pixel, which is in the form of charge, is converted to a voltage with a sense amplifier at the bottom of every column. Consistent with the low number of transistors per cell, the passive pixel has few sources for fixed pattern noise. There is one inherent weakness for passive pixels, however. Long wavelength radiation (red and near IR) is absorbed very deep in the substrate of the photodiode. Some of these photogenerated charges will find their way to the depletion region of the photodiode while others may be swept up by the reverse-biased diffusion of the column line. The combined effect of the charge leakage from cells in the same column line can be significant and will appear as a parasitic current at every column line. Though this parasitic current is also present in active pixels, its effect is more pronounced in passive pixels because charge amplification does not occur within the cell. Fortunately, this signal dependent parasitic current can be removed with correlated double sampling, making the passive pixel a competitive choice in the implementation of CMOS imagers. While CMOS imagers have not yet achieved the superior imaging quality of CCD's, they offer many advantages for applications where high image quality is not essential, but where low power, low cost and high integration is desired. Some examples include surveillance, biomedical and videoconferencing applications. 23 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. 2.6.2 Active Pixel Approach It was quickly recognized, almost as soon as the passive pixel was invented, that the insertion of a buffer/amplifier into the pixel could potentially improve the performance of the pixel. A sensor with an active amplifier within each pixel is referred to as an active pixel sensor or APS. Since each amplifier is only activated during readout, power dissipation is minimal and is generally less than a CCD. In general, APS technology has many potential advantages over CCDs [20] but is susceptible to residual FPN and has less maturity than CCDs. 2.6.2.1 Photodiode-type APS The photodiode (PD) APS was described by Noble in 1968 [51]. A schematic of the PD APS is shown in Figure 2.6 with PD shown as the potential well. It consists of a source follower transistor M2 along with a row selection transistor M3 and a reset transistor M l. A current load of the source follower is shared by all pixels in the same column at the bottom of each column. The photodiode is reset by the reset transistor Ml. After reset, the photo-induced charge starts to accumulate in a p-n junction of the photodiode. After a desired integration time, an entire row of pixels is selected for readout by turning on the row selection transistor M3. The voltage on the gate of M2, which is the voltage Vsig on PD, is driven onto the column bus. The entire row of pixels is then reset again and the reset level voltage Vrst on PD is driven onto the column bus. The two voltages on the column bus, Vsig and Vrst, can be sampled and 24 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. held in two sampling capacitors in the column analog signal chain. 
The difference of Vsig and Vrst is taken as the output signal of the pixel. VDD M i M2 RST M3 RS PD COL BUS Figure 2.6: A photodiode-type APS. The voltage on the photodiode is buffered by a source follower to the column bus, selected by RS-row select. The photodiode is reset by transistor RST (From Fossum, [21]). Photodiode-type APS pixels have high quantum efficiency as there is no overlying polysilicon. The read noise is limited by the reset noise on the photodiode since correlated double sampling is not easily implementable without frame memory, and is thus typically 50-100 electrons r.m.s. The photodiode-type APS uses three transistors per pixel and has a typical pixel pitch of 15x the minimum feature size. The photodiode APS is suitable for most mid to low performance applications, and its ■ I j'y performance improves for smaller pixel sizes since the reset noise scales as C , where C is the photodiode capacitance. A tradeoff can be made in designed pixel fill-factor 25 with perm ission of the copyright owner. Further reproduction prohibited without perm ission. (photodiode area), dynamic range (full well) and conversion gain (pV/e'). Lateral carrier collection permits high responsivity even for small fill-factor [71]. 2.6.2.2 Photogate-type APS The photogate APS was introduced by JPL in 1993 [43,44,46] for high performance scientific imaging and low light applications. The photogate APS combines CCD benefits and X-Y readout, and is shown schematically below in Figure 2.7. Compared with the photodiode pixel, it adds a photogate (PG) and a transfer gate (TX) to the photodiode pixel. Signal charge is integrated under PG. For readout, an output floating diffusion (FD) is reset by the reset transistor M l and its resultant voltage Vrst measured by the source follower with turning on the row selection transistor M3. The charge is then transferred to the output diffusion by pulsing the photogate to a low voltage, resulting in moving electrons from under PG to FD. The new voltage Vsig is then sensed. The difference between the reset level Vrst and the signal level Vsig is the output of the sensor. This sampled twice known as correlated double sampling (CDS), suppresses fixed pattern noise and correlated temporal noise significantly. The photogate and transfer gate ideally overlap using a double poly process. However, the insertion of a bridging diffusion between PG and TX has minimal effect on circuit performance and permits the use of single poly processes [45]. The photogate-type APS uses five transistors per pixel and has a pitch typically equal to 2Ox the minimum feature size. 26 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. VDD RST "j M1 -----1 1 M2 PG TX J _ RS-| M3 tm— est FD COL BUS Figure 2.7: Photogate-type APS pixel schematic and potential wells. Transfer of charge and correlated double sampling permits low noise operation (From Fossum, Noise in the sensor is suppressed by the correlated double sampling of the pixel output just after reset, before and after signal charge transfer to the floating diffusion. The correlated double sampling suppresses kTC noise from pixel reset, suppresses 1/f noise from the in-pixel source follower, and suppresses fixed pattern noise originating from pixel-to-pixel variation in source follower threshold voltage. kTC noise is reintroduced by sampling the signal onto the 1 - 4 pF capacitors at the bottom of the column. 
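To make the benefit of this double sampling concrete, the following minimal numerical sketch (illustrative values only, not taken from any measured device) models each pixel output as a true level plus a static per-pixel source-follower offset. Differencing the reset and signal samples cancels the static offset, while uncorrelated column sampling noise is left untouched, which is exactly the behavior described above.

import numpy as np

rng = np.random.default_rng(0)
n_pixels = 1000

# Illustrative (hypothetical) values, in volts
true_signal = 0.30                                    # photo-induced level change
pixel_offset = rng.normal(0.0, 0.020, n_pixels)       # per-pixel source-follower threshold mismatch (FPN)
ktc_column = rng.normal(0.0, 0.0002, (2, n_pixels))   # uncorrelated column sampling noise

v_rst = 1.0 + pixel_offset + ktc_column[0]                # reset sample
v_sig = 1.0 - true_signal + pixel_offset + ktc_column[1]  # signal sample

raw_spread = np.std(v_sig)                   # dominated by the ~20 mV FPN
cds_out = v_rst - v_sig                      # correlated double sampling
cds_spread = np.std(cds_out - true_signal)   # offset removed; only sampling noise remains

print(f"spread before CDS: {raw_spread*1e3:.2f} mV, after CDS: {cds_spread*1e3:.3f} mV")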
The floating diffusion capacitance is typically of the order of 10 fF yielding a conversion gain of 10-20 pV/e\ Subsequent circuit noise is of the order of 150-250 pV r.m.s., resulting in a readout noise of 10-20 electrons r.m.s., with the lowest noise [21]). 27 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. reported to date of 5 electrons r.m.s.[71], This is similar to noise obtained in most commercial CCDs, even scientific CCDs have been reported with read noise in the 3-5 electrons r.m.s. 2.7 On-chip Analog Signal Processing An on-chip analog signal processing can be used to improve the performance and functionality of a CMOS image sensor. A charge integration amplifier is used for passive pixel sensors and sample and hold circuits typically employed for active pixel sensors. JPL has developed a delta-double sampling (DDS) approach to suppress FPN peak-to-peak to 0.15 % of saturation level [49]. Once FPN is cancelled, and then a gain step by the programmable gain amplifier (PGA), before the pixel data is sent to the analog-to-digital converters. Other examples of signal processing demonstrated in CMOS image sensors include smoothing using motion detection [15], programmable amplification [74], multiresolution imaging [32], video compression [1], discrete cosine transform (DCT) [31], intensity sorting [8], and cellular neural networks (CNN) [19]. Continued improvement in analog signal processing performance and functionality is expected. 2.8 On-chip Analog-to-Digital Converter An on-chip ADC is desirable for several reasons. First, the chip becomes “digital” from a system designer’s perspective, easing system design and packaging. Second, digital I/O improves immunity from system noise pickup. Third, component 28 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. count is reduced. Fourth, while not immediately apparent, lower system power can be achieved, and possibly lower chip power dissipation as well. To implement a camera-on-a-chip with a full digital interface requires an on- chip ADC. There are general considerations for the on-chip ADC. The ADC must support video rate data that ranges from 0.92 Msamples/s (sps) for a 320 x 288 format sensor operating at 10 frames per second for videoconferencing, to 55.3 Msamples/s for a 1280 x 720 format sensor operating at 60 frames per second. The ADC must have at least 8-bit resolution with low integral non-linearity (INL) and differential non-linearity (DNL) so as not to introduce distortion or artifacts into the image. The ADC can dissipate only minimal power, typically under 100 mW, to avoid introduction of hot spots with excess dark current generation. The ADC cannot consume too much chip area or it will void the economic advantage of on-chip integration. The ADC cannot introduce noise into the analog imaging portion of the sensor through substrate coupling or other crosstalk mechanisms that would deteriorate image quality. There are specific considerations for implementation of on-chip ADC [55], The ADC can be implemented as a single serial ADC (or several ADCs, e.g. one per color) that operate at near video rates. The ADC can also be implemented in-pixel [23,34,40] and operate at frame rates. We have generally been pursuing column- parallel ADCs where each (or almost each) column in the pixel array has its own ADC, so that each ADC operates at the row rate [e.g. 15 Ksamples/s]. 
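The sample-rate arithmetic behind this choice can be checked with a short sketch; the formats and frame rates are the ones quoted above, and the per-column figure simply divides the total pixel rate by the number of columns.

def adc_rates(cols, rows, fps):
    """Return (total rate for one serial ADC, per-column rate for column-parallel ADCs), in samples/s."""
    total_sps = cols * rows * fps   # every pixel digitized once per frame
    per_column_sps = rows * fps     # each column ADC handles one sample per row time
    return total_sps, per_column_sps

for cols, rows, fps in [(320, 288, 10), (1280, 720, 60)]:
    total, per_col = adc_rates(cols, rows, fps)
    print(f"{cols}x{rows} @ {fps} fps: serial ADC {total/1e6:.2f} Msps, "
          f"column-parallel {per_col/1e3:.1f} Ksps per ADC")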
In this architecture, single-slope ADCs work well for slow-scan applications. Oversampled 29 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. ADCs require significant chip area when implemented in column-parallel formats. A successive approximation ADC has a good compromise of power, bit resolution, and chip area. The on-chip ADC enables an on-chip DSP for sensor control and compression preprocessing. Integrating an ADC on chip with the image sensor permits on-chip digitization of the image data, and digital off-chip drive. There are many different ADC circuits and techniques available that one can choose based on speed-resolution-power-area requirements for an image sensor. Table 2.2 summarizes the major ADC types available today. In addition to the type of ADC, the number of ADC circuits that can be implemented on the chip can also be selected based on demand. If ADC for each column in the sensor (so-called column-parallel approach) is used, the speed requirement of the ADC components will be reduced by the number of columns in the array. Other alternatives in choosing the number of ADC circuits are to use only one high-speed ADC for the whole chip, or use multiple ADCs (but significantly less than the number of columns). Our study has convincingly shown that a column parallel architecture makes the most sense for high-resolution, high-speed and low-power operation. However, a single ADC will avoid the addition of fixed pattern noise, which can be introduced by multiple ADCs. Still, the less speed demanding feature of multiple ADCs may be considered as an advantage. 30 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Table 2.2: Major analog-to-digital conversion techniques in a CMOS image sensor. ARCHITECTURE RESOLUTION POWER ENERGY PER SAMPLE SPEED ADVANTAGES / DRAWBACKS FLASH 8 bits >50 mW ~50nJ/S lOMsps -1 Gsps + Extremely fast + High input bandwidth - Highest power consumption - Large die size - High input capacitance - Expensive - Sparkle codes* SUCCESIVE- APPROXIMATION 8 b its-16 bits -1 0 0 pW ~100pJ/S 75 Ksps - 2 Msps + High resolution and accuracy + Low power consumption + Few external components - Low input bandwidth - Limited sampling rate - VIN must remain constant during conversion INTEGRATING >14 bits ~1 mW -1 0 nJ/S < 1 0 0 Ksps + High resolution + Low supply current + Excellent noise rejection - Low speed SIGMA-DELTA (S-A) >14 bits >10 mW -5 nJ/S > 200 Ksps + High resolution + High input bandwidth + Digital on-chip filtering - External T/H - Limited sampling rate PIPELINE 10 bits— 14 bits >10 mW lOnJ/S 10 Msps -100 Msps + High throughput rate + Low power consumption + Digital error correction and on- chip self-calibration - Requires 50% duty cycle typical - Requires minimum clock frequency *Sparkle codes are erratic errors caused by metastable comparators or out-of-sequence output codes (thermometer bubbles), which in turn are caused by unequal comparator delays at higher frequencies. 2.9 On-chip Color Processing This section is adopted from Photobit Corporation web site - www.photobit.com. Getting 24-Bit color from RGB (Red Green Blue) color image sensors typically put out sequential RGB color, since each pixel is covered by either a red, a green, or a blue filter. To obtain red, green, and blue information from each pixel (and 8 bits for each color, a 24-bit RGB signal per pixel), color interpolation is 31 R eproduced with perm ission of the copyright owner. 
Further reproduction prohibited without perm ission. needed as shown in Figure 2.8. This process averages out the color values of appropriate neighboring pixels to, in effect, guess each pixel’s unknown (filtered out) color data. For example, if one of the green pixels on a GRGR sequence line of the Bayer pattern is being read out, the process of color interpolation guesses that pixel’s blue value by looking at the blue above and below it and taking the average of those blues. For the red guess, the process looks at the reds to the left and right of the green pixel and averages those. As long as the color in the image changes slowly in the spatial dimension relative to the filter pattern, color interpolation works well. But for edges of objects, or fine details, color may be interpolated incorrectly and artifacts can result. For example, a small white dot in a scene might illuminate only a single blue pixel. The white dot might come out blue if it is surrounded by black or some other color, I Figure 2.8: Color interpolation. 32 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. depending on what comes out of the interpolation. This is called aliasing. One way to reduce aliasing is to use a blurring (or anti-aliasing) filter, which deliberately discards fine details. Defocusing the camera lens does almost the same thing. The initial 24-bit RGB triplet of data is no guarantee of faithful color rendition on a computer monitor. It is raw color and must be balanced. The blue signal, for example, is a combination of the blue photons from the scene, multiplied by the relative response of the blue filter, multiplied by the relative response of the silicon to blue photons. The filter and silicon responses might be quite different from a person’s eye response, so that blue to the sensor is quite different from blue to a human being. R' i v J * ic - i ~R G' = gr gg gb G B' 1 O - < * } i B To make a blue that is acceptable to human vision, the sensor blue is processed further. It can be multiplied by some coefficient to strengthen or weaken it (gain), and some green or red can be added to make it truer. (The same can be achieved for the red and green portions of the signal.) To express this processing mathematically, the new blue (B’) is related to the old blue (B) and red (R) and green (G) according to: B ’ = br xR + bg x G + bb xB (2.2) where br, bg, and bb are the weights for each of the mix of red, green, and blue to the new blue. Corresponding equations can be written for red and green. This is basically a matrix operation where the weights define a color-correction matrix. The weights, or 33 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. values, which may be either positive or negative, can be determined by calibrating the sensor against a Gretag-Macbeth Color Checker (or similar) chart. Since it is known what the colors are supposed to look like (from the chart) and what is obtained as raw color from the sensor, it is possible to work backwards to determine the color- correction matrix. This procedure is less than perfect, because the sensor response in the red, green, and blue spectral regimes is different from the human eye’s response, but the weights can still be determined with meaningful accuracy. The human eye and brain are capable of white balancing. If a person takes a white card outside, it looks white. If he takes it inside under fluorescent lights, it looks white. 
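Before returning to the behavior of color under different light sources, the interpolation rule and the color-correction matrix of equation (2.2) can be condensed into a short sketch. The matrix entries below are purely illustrative placeholders, not calibrated weights.

import numpy as np

# Hypothetical 3x3 color-correction matrix (real weights come from a chart calibration)
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

def interpolate_at_green(bayer, r, c):
    """At a green site on a GRGR row: take red from left/right and blue from above/below."""
    red  = (bayer[r, c - 1] + bayer[r, c + 1]) / 2.0
    blue = (bayer[r - 1, c] + bayer[r + 1, c]) / 2.0
    return np.array([red, bayer[r, c], blue])

bayer = np.random.default_rng(1).uniform(0.0, 1.0, (6, 6))  # stand-in mosaic data
rgb_raw = interpolate_at_green(bayer, 2, 2)   # raw RGB triplet at one green pixel
rgb_corrected = CCM @ rgb_raw                 # color correction, eq. (2.2) applied to all three channels
print(rgb_raw, rgb_corrected)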
Switching to an incandescent light bulb, the paper still looks white, and, even placed under a yellow light bulb, within a few minutes, it will look white. With each of these light sources, the white card is reflecting a different color spectrum, but the brain is smart enough to make it look white. Getting a machine to do the same thing is harder. When the white card moves from light source to light source, an image sensor sees different colors under the different conditions. Consequently, when a digital camera is moved from outdoors (sunlight) to indoor fluorescent or incandescent light conditions, the color in the image shifts. If the white card looks good indoors, for example, it might look bluish outside. If it looks great under fluorescent light, it might look yellowish under an incandescent lamp. The color of the lighting is sometimes referred to as color temperature. (This comes from blackbody physics: the higher the temperature of a blackbody, the more 34 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. the color shifts from red to yellow to blue). The sun, for example, has a color temperature of 5,900 K, and tungsten-filament lightbulbs range from 2,400 K to 3,000 K. To correct for light source color-temperature changes, the balance between red, green, and blue has to be shifted. In digital camera systems, this “white balancing” is performed by an algorithm, often automatically, either on-chip or using software. One of the troubles with color processing is that interpolating a color requires averaging values from the surrounding pixels. This involves the image being deliberately blurred. Aperture correction deblurs or sharpens the image, using mathematical image processing. (This term may have come from the optical analogy of reducing the aperture, or decreasing the f-number, to increase image sharpness.) There are numerous ways to perform aperture correction. Mathematically, it is accomplished by increasing the gain on the high-frequency components. This makes edges in the image sharper, but it also blows up any noise in the image, since pixel-to- pixel variations are amplified as well. Various algorithms can be used for different effects, such as recognizing edges and sharpening them but not sharpening flat regions of the image. Such sharpening was used to vastly improve the Hubble Space Telescope pictures, before the telescope optics were replaced and repaired a few years ago. 2.10 Summary In this chapter, CMOS image sensors was overviewed. First, a historical background on image sensors including charge-coupled device and CMOS image 35 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. sensor was discussed. Also CMOS active pixel sensor and CCD technology was compared and technical trends for CMOS APS was also showed. Second, image sensor applications were described. Third, a generic CMOS image sensor architecture and its functional block were discussed. Especially pixel circuit, analog signal processing, and analog-to-digital converter, color processing were discussed in detail. 36 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Chapter 3 Low-Power Design Methodology for CMOS Image Sensors In the twenty-first century, we believe that a wireless multimedia system is one of key electronics. 
As a result, the demand for battery-operated image sensors that can be used in the next generation of portable electronics (e.g. cellular phones, portable digital assistants (PDAs), wireless security systems, and toys) will increase, and these applications require low voltage and low power. Battery lifetime is an essential competitive factor, and the acceptable power consumption level generally ranges from a few mW down to a few hundred µW. Reducing operating power is one of the most important design issues that will confront a future wireless multimedia system. Therefore, a low-power design methodology for CMOS image sensors that weighs cost, power, speed, and performance (quality) against a given specification is needed.

The most important issue in low-power CMOS image sensors is adopting an appropriate design philosophy to handle the power budget, considering both the analog and the digital sections. Of course, the decision range is in practice limited by the available technology, the interfacing requirements, and the power supply of the given specification.

In this chapter, first, a new figure of merit for power consumption in CMOS image sensors is proposed. Second, the low-power design methodology for CMOS image sensors, from process level to system level, is presented. Third, design considerations specific to battery-operated image sensors are discussed.

3.1 Power Measure

The power consumption of a CMOS image sensor can be divided into dynamic and static power consumption, Power = Power_dynamic + Power_static, and is given by the following formula:

Power = η·C·V_dd·V_pp·f + V_dd·I_bias + V_dd·I_short-circuit + V_dd·I_leakage   (3.1)

where η is the transition probability of the output, f is the operating frequency, C is the equivalent capacitance of the circuit, V_dd is the power supply voltage, V_pp is the peak-to-peak voltage, I_bias is the bias current, I_short-circuit is the short-circuit current, and I_leakage is the leakage current, respectively.

Dynamic power consumption, the switching component of power, arises when the capacitive load C of a CMOS circuit is charged through the PMOS transistors to make a voltage transition from 0 to the high voltage level. On the high-to-0 transition at the output, no charge is drawn from the supply; instead, the energy stored in the capacitor is dissipated in the pull-down NMOS device. In general, the switching will not occur at the clock rate, but rather at some reduced rate that is best described probabilistically; η is defined as the average number of times in each clock cycle that a node makes a power-consuming transition.

Figure 3.1: Low-voltage CMOS switch (transmission-gate conductance versus input voltage at Vdd = 3.3 V and at Vdd = 1.2 V).

Let us look at static power consumption. First, the bias current is usually drawn by the voltage/current reference and bias circuits in the analog circuitry. Second, the finite rise and fall times of the waveforms at the inputs of logic gates result in a short current path between Vdd and GND, which exists for a short period during switching. An important point to note is that if the supply is lower than the sum of the thresholds of the transistors, Vdd < Vtn + |Vtp|, the short-circuit currents can be eliminated.
Further reproduction prohibited without perm ission. Because both devices will not be on at the same time for any value of input voltage as shown in Figure 3.1. Third, there are two types of leakage currents: reverse-bias diode leakage on the transistor drains and subthreshold leakage through the channel of an off device [1 0 ]. Most of all cases, static power consumption is fixed at certain power supply voltage. If we look at dynamic power consumption more closely in terms of cost (size), power supply, speed (frequency), and performance (signal-to-noise ratio: SNR) with respect to a given specification, the following formula can be achieved Powerd y n a m ic = TjCVddVppf = ijkTfSNR, (3.2) where t] is the dimensionless constant, k is Boltzmann’s constant, T is the absolute temperature,/is the frequency, and SNR is the signal-to-noise ratio, respectively. Therefore the effective ways to reduce dynamic power consumption are to reduce the switched capacitance, lower the power supply voltage, reduce the speed, decrease the temperature, and lower the performance. In order to compare the power consumption performance of CMOS image sensors reported in the literature, a new figure of merit is chosen. This figure of merit (Joules/pixel) is power consumption divided by the product of frame size and frames per second (fps). Power Measure (Joules/pixel) - Power Consumption / (Frame size X fps). (3.3) 40 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Still a comparison is quite difficult to do since its conclusion depend on the specific application and performance requirement. So we categorize the sensor by its output type and specification shown in Table 3.1. We believe that this measure is a good indicator of the relative power performance for imaging applications. Table 3.1: Sensor output type and specification. A Output Type Specification 1 Analog output 2 Digital output On-chip ADC 3 Digital output On-chip timing & control generator 4 Digital output On-chip color processing, coding 5 Digital output On-chip compression, communication 1000 5 V, 0.8 nm [38] A A=1 ■ A=2 x A = 3 • A=4 5 V, 0.6 u rn [14] 100 3.3V ,0.8jtm [62] 5 V, 1.2 nm [50] 5 V, 0.8 pin [75] .M Y JL5junJ721 10 3.3 V, G.6jmv[65] 1.2 V, 0.35 urn [12] _[first (w PAD)] 3.3 V, Q.6p.m [52] 5 V.0.5 nm [41] 1 1.2 V, 0.35jim [12] [first (w/o PAD)] 0.1 0.01 1997 1998 1999 2000 2001 1996 Year (ISSCC) Figure 3.2: Comparison of previously reported imager sensors in terms of new power figure of merit (nJoules/pixel). (Note: semi-logarithmic scale) 41 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Figure 3.2 shows the power measure of state of the art for CMOS image sensors presented at IEEE international solid-state circuits conference (ISSCC) in previous years [12,14,38,41,50,52,57,62,65,72,75]. Clearly we can recognize the trend for low-power consumption. 3.2 Low-Power Design Methodology Design \ Specification / Process Technology Algorithm Selection System Integration Circuit/Logic Design Architectural Design Figure 3.3: Design process. In order to optimize the power dissipation of an image sensor, the low-power methodology should be applied throughout the design process from process level to system level, while realizing the performance to satisfy design specifications, as shown in Figure 3.3. Depend on design specifications, the low-power design 42 R eproduced with perm ission of the copyright owner. 
Further reproduction prohibited without perm ission. methodology can be quite different. For example, if the process technology is specified in design specifications, then there is no need to consider. 3.2.1 Power Reduction through Process Technology First of all, the trends for device scaling are considered [70]. In the past, the minimum lithographic feature size has been decreasing by 0.7 times every three years, while the chip size has been increasing at 1.5 times every three years. As shown in Table 3.2, based on the constant electric field scaling law, the lateral and the vertical dimensions of a MOS device are scaled down by a factor of s (5> 1). In order to maintain the internal electric field, the supply voltage and the threshold voltage need to be scaled down by factor of s. Therefore, the substrate doping density is increased by s times and the drain current is decreased by s times. The delay is shrunk by s times and power consumption becomes smaller by s2 times, hence the power delay product is decreased by s3 times. Table 3.2: Scaling laws of MOS devices. Parameter Scaling Factor W, L, toX 1 Is Vdd, Vt 1/5 C o x 1/5 Parasitic Capacitance 1/s Id 1/5 Delay 1/5 Substrate Doping 5 Power Consumption 1/s1 Power-Delay Product Us5 43 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. According to the international technology roadmap for semiconductors in 1999, the value of the power supply voltage is given as a range as shown in Table 3.3 [59]. Generally digital circuitry can have a benefit most from the next generation technology such as area, speed, and power performance. From the scaling laws, the most advanced technology for low-power consumption should be chosen. However, CMOS image sensors are more performance sensitive than digital circuitry, thus they require a stable, well-characterized technology. Eventually a CMOS image sensor production lags one-generation behind the CMOS roadmap technology is predicted. From this point of view, a 0.35 pm CMOS technology as the preferred design technology for this research is chosen. Table 3.3: CMOS process and power supply trends in CMOS image sensors. Year 1999 2001 2002 2003 2005 2008 2011 2014 CMOS Technology 180 nm 130 nm 100 nm 70 nm 50 nm 35 nm Power Supply Voltage (V) 1.5- 1.8 1.2-1.5 1.2- 1.5 0 .9-1.2 0 .9 - 1.2 0.6 - 0.9 0.5 - 0.6 0 .3 -0 .6 CMOS Technology for CMOS Image Sensor 500 nm 350 nm 250 nm 180 nm 130 nm 100 nm 70 nm 50 nm Power Supply Voltage (V) for CMOS Image Sensor 3.3 - 5.0 2.5-3.3 1.8-2.5 1 .5 - 1.8 1.2 -1.5 0.9 - 1.2 0.6 - 0.9 0 .5 -0 .6 At the process level, the techniques to minimize power are: Through the small geometry and junction capacitance, the trends are toward more reduction of the power supply voltage, threshold voltage, capacitance, and higher density of integration. Reduction of the power supply voltage is driven by several factors: reduction of power dissipation, reduced transistor channel length, and 44 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. reliability of gate dielectrics. Obviously the most effective way to reduce power dissipation is to lower the power supply voltage Vdd. While digital circuits can work without too many problems in such supply conditions, a delay increases significantly, particularly when Vdd approaches the threshold voltage. To overcome this problem, the device should be scaled properly. 
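As a quick illustration of the constant-field scaling law of Table 3.2, the following sketch evaluates the scaling factors for an arbitrary example of s = 2, roughly two process generations.

def constant_field_scaling(s):
    """Constant-electric-field scaling by a factor s (> 1), following Table 3.2."""
    return {
        "dimensions (W, L, tox)": 1 / s,
        "Vdd, Vt":                1 / s,
        "drain current":          1 / s,
        "delay":                  1 / s,
        "power per gate":         1 / s**2,
        "power-delay product":    1 / s**3,
        "substrate doping":       s,
    }

for name, factor in constant_field_scaling(2.0).items():
    print(f"{name:24s} x {factor:g}")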
As CMOS image sensor designers reduce Vdd, they must also drop the threshold voltage Vt of the transistors for faster switching speeds. But for every 100 mV reduction in the Vt, leakage and standby current jump by a factor of 10. So new analog circuits and architectures must be developed to keep the similar performance without dropping the threshold voltage with respect to the operation at higher supply voltages. Anyway changing the process technology and parameter is not desirable for most CMOS image sensors. Through the improvement of the current drive capabilities due to the improvement of device characteristics and the interconnection technology, the bias current in analog circuitry to keep similar performance of the old process technology can be reduced. Through the availability of multiple and variable threshold devices, so called multiple threshold CMOS (MTCMOS) process which is below 0.25 pm process [47,48], new low-voltage and low-power circuits can be created for the power- performance optimization. For example, the multiple threshold technique can be used to reduce the subthreshold leakage current. The low threshold devices are used to realize the logic block such that switching speed can be enhanced. This logic block is not connected to Vdd and GND directly. Instead it is connected to the virtual power 45 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. lines which are connected to Vdd and GND via the power switch transistors with a high threshold voltage. Through other process technology, silicon on insulator (SOI) technology can be considered. Typical SOI designs beyond 0.25 )im process use 40-200 nm thick silicon films [70], this means that the photo charge collection depth for SOI substrates will be drastically reduced from the bulk substrate, especially long wavelength photons. On the other hand, the smaller parasitic capacitance of SOI as compared to bulk substrates is a clear advantage on power consumption. But still SOI active pixel image sensor is premature. 3.2.2 Power Reduction through Circuit/Logic Design To minimize power consumption at circuit/logic level, optimization, activity- driven power-down, clever circuit and layout techniques can be used. Through the optimization of dynamic power consumption, for given SNR and speed (delay: t<j), transistor, capacitor C, and resistor sizes can be optimized. For constraints for SNR and speed of analog signal chain are ljkT / C ’ (3.4) rjkTSNR (3.5) V V d d pp Td O C ijCV. d d (3-6) 46 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. where rj is the dimensionless constant, k is Boltzmann’s constant, T is the absolute temperature, / is the operating frequency, V dd is the power supply voltage, V pp is the peak-to-peak voltage, V t is the threshold voltage, a is the dimensionless constant, respectively. The most important basic building blocks in CMOS image sensors are an operational amplifier (opamp) and ADC. In a well-designed low-voltage opamp, the minimum supply voltage value is imposed by the differential pair of the input stage, and is equal to a threshold voltage plus two overdrive voltages (V d S )• For 0.35 pm CMOS process, this value turns out to be around 1 V. On the other hand, the main limitation of differential pairs consists in the reduced input common-mode range. To avoid this drawback, the simplest solution could consist in using amplifiers connected in inverting configuration [17]. 
If the amplifier input stage is an n-channel differential pair, then at a 1 V supply voltage, for instance, the dc input common-mode level must be kept very close to the power supply voltage in order to ensure a rail-to-rail output swing.

If we look more closely at the ADC in CMOS image sensors, as shown in Table 2.2 in Chapter 2, the successive approximation ADC is the most energy-efficient per sample. The constraints due to the capacitors in the successive approximation ADC are

f_s ≤ P / (η·k·T·2^(2N))   (3.7)

C_unit ≥ η·k·T·2^(N+1) / V_ref²   (3.8)

where f_s is the sampling frequency, P is the power consumption, N is the resolution in bits, C_unit is the unit capacitance, and V_ref is the reference voltage, respectively. On the other hand, the constraint due to the resistors in a flash ADC [39] is

f_s < I_ref / (V_ref·C_p·N·ln 2) = 1 / (R·C_p·N·ln 2)   (3.9)

where f_s is the sampling frequency, I_ref is the reference current, N is the resolution in bits, C_p is the parasitic capacitance, and V_ref is the reference voltage, respectively.

Through logic optimization, the switching activity can be reduced in the timing & control generation block and in the digital signal processing (DSP) blocks. As a result, clock and bus loading can be optimized. Re-encoding sequential circuits can also reduce dynamic power consumption.

Through the activity-driven power-down technique, power can be reduced. First, in the logic design, a static style is preferred over a dynamic style. Second, power management techniques in which unused blocks in the analog circuitry are shut down by a power cutoff switch can be used. Third, in the digital domain, a clock-gating technique can be applied to reduce dynamic power in the timing & control and DSP blocks. In particular, latches are only clocked when useful data is available at their inputs. This is achieved by locally gating the master clock with a data-ready signal. A synchronous circuit is either entirely quiescent or entirely active. An asynchronous circuit, in contrast, only consumes energy when and where it is active. The classical example of a low-power asynchronous circuit is a frequency divider. In many digital signal processing functions the clock rate exceeds the data or signal rate by a large factor. A D-flipflop (Dff) with its inverted output fed back to its input divides an incoming clock frequency by two, as shown in Figure 3.4. A cascade of N such divide-by-two elements divides the incoming frequency by 2^N. The entire asynchronous cascade consumes, over a given period of time, slightly less than twice the power of its head element, independent of N. In contrast, a similar synchronous divider would dissipate in proportion to N [6].

Figure 3.4: Frequency divider. (a) divide-by-2 element. (b) divide-by-2^N circuit.

Another example is a sensor that is nominally operated in stand-by mode, with all pixels held in reset, drawing only stand-by current while waiting for an event-detector signal; it can be placed in normal integration mode as soon as a signal arrives, so that it loses no data [22].

Through additional clever circuit techniques, the internal voltage swing can be minimized in digital and analog circuit blocks by reducing V_pp in non-critical paths and by proper transistor sizing, as shown in Figure 3.5. In this case, a level-shifter can convert low-voltage signal swings to the high-voltage signal swings required by I/O devices, or vice versa [12].

Figure 3.5: Low-power technique by reducing the internal bus swing (a low-swing internal bus driven and received through level-shifters).
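A quick sketch of the saving available from swing reduction, using the switching term of equation (3.1) evaluated per clock cycle; the bus capacitance and activity factor are hypothetical values chosen only for illustration.

def bus_energy_pj(c_bus_f, vdd, vpp, activity=0.5):
    """Dynamic energy per cycle for one bus line: eta * C * Vdd * Vpp (switching term of eq. 3.1), in pJ."""
    return activity * c_bus_f * vdd * vpp * 1e12

c_bus = 2e-12                                      # hypothetical 2 pF global bus line
full_swing = bus_energy_pj(c_bus, 1.2, 1.2)        # rail-to-rail swing
low_swing  = bus_energy_pj(c_bus, 1.2, 1.2 / 4)    # swing reduced to Vpp/n with n = 4
print(f"full swing: {full_swing:.2f} pJ/cycle, reduced swing: {low_swing:.2f} pJ/cycle")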
In this case, a level- 49 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. shifter can convert low-voltage signal swings to high-voltage signal swings required by I/O devices or vice-versa [12]. V x 1/n V x n input output level-shifter level-shifter Figure 3.5: Low-power technique by reducing the internal bus swing. Through the layout technique, the place and route should be optimized such that signals that have high switching activity should be assigned short wires and signals with lower switching activities can be assigned longer wires. 3.2.3 Power Reduction through Architectural Design At the architecture level, several approaches can be applied to the design flow: Through the minimizing the number of component and block, power consumption can be reduced by reducing the chip function and utilizing the resource- sharing concept. Through the power management techniques, the chip architecture partition with selectively enabled blocks can be divided. 50 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Through the parallelism (concurrency), in CMOS image sensors, an analog signal chain and on-chip ADC can be built as pixel-parallel, column-parallel, serial fashion. Possible ADC architectures are shown in Figure 3.6. Generally the conversion is performed either in a column-parallel fashion or serially. The column- parallel approach has the advantage of using slow converters to achieve a high conversion rate, while high-speed serial ADCs enable a smaller chip size. data serial column column digitized ADCs parallel parallel in pixel (at data ADCs ADCs rate) (multiplexed output) (parallel ports) (a) (b) (c) (d) Figure 3.6: ADC architectures, (a) pixel-parallel output, (b) serial output, (c) column- parallel multiplexed-output. (d) column-parallel parallel-output. 51 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. For example, the power estimator for high-speed ADCs is estimated as follow [37]: V 2 .T .(F + F t d d min V x s a m p le s i g n a l * / o 1 r \ \ P ° Wer = l 0 (-O.l525-flWjWii38i) ( 3 -1 0 ) where V d d is the power supply voltage, Lm in is minimum feature size, Fsa m p ie is the sampling frequency, Fsig n a i is the signal frequency, and ENOB is the effective number of bit of ADC. Power Consumption 70 60 ■high-speed ADC at 1.5V ‘low-speed ADC at 1.5V high-speed ADC at 3.3V low-speed ADC at 3.3V .2 40 o 30 50 30 40 0 10 20 Sam pling F requency (MHz) Figure 3.7: Power estimation for low and high-speed ADCs. 52 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. From the Table 2.2 in Chapter 2, as the rule of thumb, we learn that the power consumption of low-speed ADCs is one order of magnitude less power than high ly2 I -(F + F 1 i a t ^ / - i 1 ■ i • dd min V -1 sam ple s ig n a l/ / speed ADCs, which is power = --------------------------- • Ener§y (Power Per Kconversion) is ~1 pW/Kconversion for high-speed ADCs and -0.1 pW/Kconversion for low-speed ADCs. For the given speed constraint, if we adopt low-speed ADCs in parallel fashion, we can reduce one order of magnitude less power than high-speed ADCs. Also parallel output ports for a high-speed image sensor can be easily built. Figure 3.7 shows the power estimation for low- and high-speed ADCs with ENOB = 7. Through the pipelining, power consumption can be reduced. 
There are three areas: analog signal chain, ADC, and DSP. Typical example is a pipelining ADC [11], This ADC uses a technique similar to digital circuit pipelining in DSP to trade latency for throughput. Only a few bits are resolved at a time. This approach increases the throughput and reduces the required number of comparators compared to a flash converter. Through the data representation with a switching activity as low as possible, the operation for decoding and execution can be reduced. In the row/column select logic, there are two design choices: NAND-logic and shift register array type. Although window and random access functions is sacrificed, which are not necessary in a small-format image sensor, shift register array type reduces the number of global buses. Gray coding as another addressing method reduces the switching activity of global buses. Because the code words for adjacent levels are separated by a Hamming distance of 1 [64]. Table 3.4 shows 3-bit binary and gray code. Gray code can be 53 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. obtained from binary code as follows: gray(2 ) <= binary(2 ), gray(l) <= binary(2 ) xor binary(l), and gray(0) <= binary(l) xor binary(0). Also it is possible to introduce redundancy into the data representation and thus reduce the number of signal changes [63]. Table 3.4: 3-bit binary code and gray code. Decimal No. Binary Code Gray Code 0 0 0 0 0 0 0 1 0 0 1 0 0 1 2 0 1 0 0 1 1 3 0 1 1 0 1 0 4 1 0 0 1 1 0 5 1 0 1 1 1 1 6 1 1 0 1 0 1 7 1 1 1 1 0 0 3.2.4 Power Reduction through Algorithm Selection At the algorithm level, the techniques to minimize power are: Through the complexity of the selected algorithm, the operation and hence the number of hardware resources can be minimized. Typically a color process in color image sensors recovers 24-bit RGB data from Bayer-pattem subsampled RGB color. This color process requires automatic black balance, color interpolation, automatic white balance, aperture correction, gamma correction, and color space conversion 54 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. [62]. So in this color process, the different configurations to generate the reasonable color quality can be chosen. Through the signal correlation such as locality and regularity, power can be reduced. An important property of images is that neighboring pixels often have similar values. Images are normally stored, transmitted, and processed in a line-by- line format, therefore the dependency from the previous pixel in the same line influences bit-changes. In predictive coding schemes, the next input is predicted based on the digital coded in the past. The simplest form of prediction can be achieved by using differential methods. The idea behind differential methods is to encode the value of the difference between the previously encoded pixel and current pixel. Due to the spatial correlation existing in a natural image, the resulting values to be encoded have a lower dynamic range than the original values. Through minimizing the number o f operations, the switching activity can be reduced by optimizing the ordering of operations. Consider the problem of multiplying a signal with a constant coefficient, which is a very common operation in color image processing. The multiplication with constant coefficients can be optimized by decomposing the multiplication into shift-add operations. 
The basic idea is that a multiplication with 0 is a “no-operation” and a multiplication with a constant degenerate to shift-add operations corresponding to the l ’s in the coefficient. For example, equation (3.11) shows the color balance and equation (3.12) shows the color conversion between the RGB color space to the YIQ color space. 55 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. ~Rr G' = B' Sr gg b . b„ g b R G B (3.11) Y y r y s y b ~ R I - K h G (3-12) Q. Q r qz qK B Through the compression [1,31], data communication rate can be reduced. In this case any block-based compression processing can not be considered which tends to be very compute intensive and power-hungry, rather simple differential method. Aizawa et al. [1] describe an image sensor which comprises sensor level compression. The compression significantly reduces the amount of image data to be read out. The compression algorithm is based on conditional replenishment [29], in which the pixel value is compared with the previously sampled and stored value. If the result of comparison exceeds a threshold, the activate signal is activated which controls the scanning logic, when the row containing that cell is scanned. The scanning logic bypasses all inactive cells and only reads out the pixel value of activated cells, hence reducing the scanning time. Through the approximation signal processing, power can be reduced by allowing some inaccuracies in the computation [6 6 ]. An 8 -bit signal processing is normally required in the ADC and digital signal processing for an 8 -bit CMOS image sensor output. This signal processing can be scalable in order to obtain different 56 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. power consumption and quality operating points. Many display devices nowadays still allow a limited number of colors, called color palette, to be displayed simultaneously. For example, Liquid crystal display (LCD) typically shows 6 -bit resolution, so we can reduce the number of bit-operation. Besides, images and videos in most World Wide Web databases are in compressed formats. Therefore, it becomes an important issue to retrieve a suitable color palette from compressed domain in order to have fast and faithful color reproduction for these devices. S. Pei et al. use the reduced, rather than the whole, image for the color palette design to avoid the heavy computation in image or video decompression [56], 3.2.5 Power Reduction through System Integration The system level is also important to the whole process of power optimization. Some techniques are: Through utilizing the low system clock, we can adopt a phase-locked loop (PLL) technique to generate the high-speed clock [73]. Normally CMOS image sensors can be operated at the pixel clock rate. However if we use the serial ADC, it requires the high clock frequency. Through the hardware and software partitioning, it attempts to find an optimal partitioning assignment of tasks between hardware and software [27], It is possible to reduce power dissipation by off-loading computation to servers that do not have energy constraints. For example, consider an image sensor that performs data compression before transmission to servers. Most video compression algorithms use 57 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. 
form of block-based motion estimation/compensation to remove the temporal correlation inherent in natural video sequences. Unfortunately these algorithms tend to be very compute intensive, resulting in significant power drain. So most low-power sensors do not perform data compression, except bandwidth-limited wireless sensor systems. Through designing as System-on-a-Chip (SOC), we can reduce the overall system pin requirements by combining functionality into SOC through the use of multi-chip modules, bumped chip-on-board (COB), and other creative solutions. We can integrate off-chip memories and other ICs such as digital and analog peripherals. At system level, Poweriy m m ic = 'nCVddV ppf , off-chip buses have capacitance C that are orders of magnitude greater than those found on internal signal lines in a chip. Therefore, transitions on these buses result in considerable system power dissipation. Hence, the signal-encoding approaches in literature achieve power reduction by reducing transition probability 77 while keeping C more or less unaltered [58,63]. Through the system can be operated without a voltage regulator, there will be greater savings, but then a more variable supply voltage must be tolerated, possibly along with a variable clock speed. Current solution use charge pumping techniques and voltage regulation to generate the higher voltages [11]. This approach increases, however, the complexity of the system and thereby its cost, size, and weight. It also increases the energy requirements, since charge pumping and voltage regulation will not be achieved without losses. To optimize the consumption of electric charge, it is 58 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. desirable to use integrated circuits able to operate directly from the voltage provided by the battery. Through the video codec processing, before we send a pulse code modulation (PCM) signal, however, it may be put into several different forms. Choice of these different ways of conveying the binary code information will depend on the type of modulation and demodulation employed and other constraints on bandwidth, receiver complexity, etc. In addition to the video codec processing, a wireless device is constrained by the fundamental principles of RF data transmission, which dictate that the minimum transmitter power required to transmit data at a rate B is given by [30] P = ~ -M ■ SNR kB T ■ FrB s r (3-13) where A is a factor containing circuit losses divided by antenna gain, sx is the efficiency of the transmitter’s power amplifier, M is extra signal margin to accommodate fading, SNR is the required SNR for adequate signal detection, k is Boltzmann’s constant, T is the temperature of the receiving antenna, F is the noise figure if the receiver, d is the distance between transmitter and receiver,/is the carrier frequency, and c is the speed of light, respectively. Figure 3.8 summarizes low-power design steps from process technology to system integration. 59 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. 
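Returning to the transmission-power relation above, the following sketch assumes the conventional free-space form P = (A/ε_T)·M·SNR·k·T·F·B·(4πdf/c)², which is how equation (3.13) is read here; all numerical inputs are arbitrary illustrative values.

import math

def min_tx_power(bit_rate, distance_m, carrier_hz,
                 path_loss_factor=2.0,   # A: circuit losses divided by antenna gain (assumed)
                 pa_efficiency=0.2,      # epsilon_T: power-amplifier efficiency
                 fade_margin=10.0,       # M: extra signal margin for fading
                 snr=10.0,               # required SNR (linear)
                 noise_figure=4.0,       # F: receiver noise figure (linear)
                 temp_k=290.0):          # T: receiving antenna temperature
    k = 1.38e-23                         # Boltzmann constant, J/K
    c = 3.0e8                            # speed of light, m/s
    noise_floor = k * temp_k * noise_figure * bit_rate
    free_space = (4 * math.pi * distance_m * carrier_hz / c) ** 2
    return (path_loss_factor / pa_efficiency) * fade_margin * snr * noise_floor * free_space

# e.g. roughly 1 Mbit/s of compressed video over 10 m at 2.4 GHz (all values illustrative)
print(f"{min_tx_power(1e6, 10.0, 2.4e9) * 1e3:.2f} mW minimum transmit power")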
System Integration Architectura Design Algorithm Selection Circuit/Logic Design Process Technology L ow S y s te m C lock H ardw are an d S o ftw a re P artitioning D y n a m ic P o w e r M a n a g e m e n t S y ste m -o n -a -C h ip (S O C ) P o w e r M a n a g e m e n t P a rallelism (C on cu rren cy) P ip elin in g S ig n a l C orrelation D ata R ep resen ta tio n O ptim ization A ctivity-driven P o w er-d o w n C lev er C ircuit T e c h n iq u e L ayout T e c h n iq u e D e v ic e /P o w e r S c a lin g A d v a n c e d In tercon n ection M ultiple T h resh o ld C M O S C om p lexity L ocality a n d R egularity C o m p r e ssio n A pproxim ation S ig n a l P r o c e ssin g D ata R ep resen ta tio n Figure 3.8: Low-power design steps. 60 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. 3.3 Battery-Operated Sensor Considerations Applications requiring a high computational throughput, such as laptop computer will typically rely on special rechargeable battery solutions. The price range of such systems allows a customized solution, and the weight of the battery, although a considerable part of the total weight, can still be in hundreds of grams. As opposed to that, other portable systems, such as pagers or entertainment systems, will typically use inexpensive, ubiquitous alkaline disposable batteries. Voltage (V) 0.9 critical load Load (pA) Figure 3.9: Typical battery load characteristic. Such a battery is composed of one or several electrolytic cells, each of them generating 1.5 V typically at the beginning of its lifetime. The voltage of the alkaline battery varies during its life span, from initially 1.5 V to around 1.2 V, where it stays 61 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. most of its useful time. When the voltage reaches 0.9 V, it stays decaying rapidly, and the battery has to be replaced. It is important to notice that the variation of the supply voltage can be expressed as a nominal 1.2 V affected by a 25 % variation, a domain significantly wider than the usual 5 % - 10 % tolerances in typical logic integrated circuits. So a battery-powered image sensor should have more than 25 % tolerance. Alkaline-battery manufacturers specify the discharge life of their batteries under constant power load. But unlike nickel-metal-hydride and lithium-ion rechargeable batteries, the discharge voltage of alkaline batteries is not constant and tends to vary from 1.5 to 0.9 V per cell. Figure 3.9 shows typical battery load characteristic for different power loads. So the peak current reduction should be considered in battery-operated CMOS image sensors. Overall the properties of the battery as a power supply need to be well understood in order to design efficient solutions for battery-powered CMOS image sensor systems. To optimize the consumption of electric charge, it is desirable to use integrated circuits able to operate directly from the voltage provided by the battery without any voltage regulator as mentioned before. The imagined two-way wireless video wristwatch is an example of an application that would need to run on very low-power. For present-day wristwatches, with their year or longer battery life, the power dissipation is generally <10 pW. Even with larger higher energy-density batteries, and/or more frequent replacement, it would still appear desirable to keep the average dissipation below a few hundred pW. 
This power level will probably be possible, but only for a low resolution image sensor 62 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. is considered. If the sensor were only use 5 % of the time, then an active power of roughly a couple of mW could be tolerated. 3.4 Summary Power dissipation is a prime design constraint for portable systems. The low- power design requires optimization at all levels - technology, circuit and logic, architecture, algorithm, and system integration. The low-power design methodology in CMOS image sensors has been described from process level to system level in this chapter. We do not talk about color image processing, compression, and communication in detail. Low-power techniques for these areas are beyond the scope of this research, however, many research works can be found. Here the focus is the low-power design methodology for a generic CMOS image sensor. 63 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Chapter 4 A 1.2 V Micro power CMOS Active Pixel Image Sensor Low-power image sensors are highly desirable for portable applications, including cellular phones, portable digital assistants (PDAs), and wireless security systems, since low-power consumption is a fundamental demand for these battery- operated portable equipment. Using standard CMOS processing technology, active pixel sensor (APS) designs can create miniaturized systems-on-a-chip by integrating signal processing and timing control on the imaging chip. In traditional CCD based imaging systems, multiple chips are usually required to generate timing signals, gain, and analog-to- digital conversion for the sensor. In addition, because these functions are contained on-chip in APS designs resulting in smaller interconnect capacitances, lower power imaging systems can be designed. Shrinking CMOS feature size allows a pixel to contain transistors to buffer the pixel output and enables individual pixel row addressing. The X-Y addressable sensor architecture also allows more user flexibility in how the sensor is scanned out and allows a high degree of parallel signal processing. Because the sensor is designed in standard CMOS, application specific functions can be easily embedded into the sensor and prototyping designs can occur 64 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. rapidly at relatively low cost. Scaled CMOS technology has bright prospects for both high-speed and low-power image sensor systems [21], This chapter presents an image sensor that is designed for 1.2 V operation and dissipates one-to-two orders of magnitude less power than current state of the art CMOS image sensors. The image sensor architecture and detail analog and digital building blocks in this micropower CMOS active pixel image sensor are described. 4.1 Sensor Chip Architecture The sensor’s block diagram is shown in Figure 4.1. The core pixel array consists of 176(H) x 144(V) photodiode active pixels [quarter common intermediate format (QCIF)] with a 5 pm pitch. The array of pixels is accessed in a row-wise fashion using a shift register and row driver with a reset bootstrapping circuit so that all pixels in the row are read out into column analog readout circuits in parallel. 
Each of the 176 column-parallel readout circuits performs both sample-and-hold (S/H) and delta double sampling (DDS) functions, eliminating pixel offset variations and pixel source-follower 1/f noise. The signal is stored in the charge domain. The global charge-sensitive amplifier at the front end of the ADC provides a fixed gain for the column charges being read using the column select logic. The amplifier reset and the amplifier signal values are sent to the 8-bit self-calibrating successive approximation ADC. The ADC generates the 8-bit digital output. The digital timing and control logic block generates the proper sequencing of the row address, column address, ADC timing, as well as generates the synchronization pulses for the pixel data going off- 65 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. chip. The on-chip clock generator generates an internal clock with on-chip bandgap reference circuitry for the timing and control logic. bandgap reference and clock generator digital timing and control logic d ata 4- u. O row > shift T3 > register > 2 phrl phr2 p h r jn rs tb o o s t rst en reset boot strapping logic rst#0, row#0 (0,0) photodiode APS array (176x144) p ix_col#175 ADC strobe s h r, s h s p ix _ co l# 0 vin_en column analog signal chain s a m p ie jn (sample-and-hold) col out co l# 1 7 5 col#0 se lec t column driver column shift register* p h cl • phc2 p h c jn g lo b al_ o u t global signal logic opam p_rst opam p en biases (Vln,Vref) Figure 4.1: Sensor’s block diagram. 6 6 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. serial digital port 4.2 Analog Building Blocks A signal path from pixel to ADC is shown in Figure 4.2. The design of a low- voltage CMOS sensor involves several well-known challenges such as a) the reduced dynamic range of circuits, b) the low-voltage MOS switch problem, c) low-voltage opamp and ADC design, and d) low-power internal bias generation. The challenges discussed above are addressed in the following way: a) the pixel voltage dynamic range is increased by using a bootstrapped reset pulse; b) the column analog readout circuit is designed so that only unipolar MOS switches are required. For instance, the S/H switch is of an n-type and is good for sampling pixel signals that are always “low”. While the column select switch is of a p-type, which is good at connecting high level signals, such as for the reference voltage, etc.; c) the charge mode readout fixes the readout bus voltage so that the requirements on the amplifier input voltage swing are relaxed. On the other hand, a current-mirror OTA used in this design yields almost rail-to-rail output; and d) a capacitive ADC is selected so as to avoid some low voltage design problems that would be faced with different types of ADC such as flash, pipelining, or folding converter. 67 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. global amplifier opam p_rst pixel Vdd shr s a m p le jn col rst C2 m . T vln ? n — I! row : Vlrj — J| sh s - ± z shs I - + strobe opam p_en column signal chain ADC block 6 Vref Figure 4.2: Pixel to ADC signal path. Figure 4.3 shows the relative timing for row and column operation of the sensor. At 20 frames per second, the sensor is clocked from a 16.5 MHz source. The total row time at this frame rate is 352.96 psec ((192 + 176 x 32) clocks). 
This period is divided between the time required for the column analog operations (192 clocks) and the ADC conversion time (32 clocks). The analog readout sequence starts with the selection of a pixel row, whose outputs are sampled onto the column S/H capacitors in parallel. Each ADC processing slot consists of the global S/H (16 clocks) and the ADC conversion (16 clocks). At the full 20 fps frame rate, the ADC used in this sensor needs to operate at 500 ksamples/sec. The ADC is calibrated at the beginning of the very first frame to compensate for the DC offset at the input of the comparator.

Figure 4.3: Relative row and column timing.

In addition, the following measures have been undertaken to reduce the sensor power. First, unused blocks such as the pixel current load in the column circuit, and the comparator and opamp in the ADC, are cut off from power during the time they do not operate. Second, the column readout circuits receive the reference voltage from the readout opamp, eliminating the need for a power-consuming reference voltage generator. In this case, the reference voltage is loaded only onto the high-impedance opamp input, so the Vref voltage source can be implemented as a high-resistance one. Third, the column S/H circuit does not have an active buffer; it was replaced with passive capacitor storage.

4.2.1 Pixel Photodetector

Figure 4.4: CMOS active pixel sensor layout and schematic (wavelength range: 350 nm up to 1000 nm).

A photodiode pixel is the sensing structure used for the micropower image sensor. There are two kinds of photodiode structures often used in a standard CMOS process. The traditional n+ diffusion/p-substrate (or p+ diffusion/n-substrate) photodiode has a short minority-carrier diffusion length. Electron-hole (e-h) pairs generated in the n+ diffusion region will most likely recombine before they are captured by the depletion region. The shallow diffusion-substrate junction is also harmful for red light absorption, because red light can penetrate beyond this depletion region. A second photodiode structure can be implemented with a well-substrate junction. This photodiode has a longer diffusion length in the well, but its blue light response is poor. Blue light will generate e-h pairs near the Si surface, and since the well-substrate junction is deeply buried, these e-h pairs will most likely recombine at the surface. According to our measurement results, the N-well photodiode shows lower dark current and higher quantum efficiency for imager applications.

Increasing the amplitude of the rst signal to Vdd + threshold voltage using the bootstrap switch circuit adopted from [61] should increase the pixel reset voltage and extend the pixel dynamic range. The bias for each column's source follower is 1 μA, permitting charging of the sampling capacitors within the sampling time. The source followers can then be turned off by cutting the voltage on each load transistor (not shown in Figure 4.4, see Figure 4.2).
The horizontal blanking interval is less than 2% of the line scan readout time, so that the sampling average power dissipation corresponds to:

    Power = n · I · V · d    (4.1)

where n is the number of columns, I is the load transistor bias, V is the supply voltage, and d is the duty cycle. Using n = 176, I = 1 μA, V = 1.2 V and d = 2%, a value for Power of 4.2 μW is obtained.

4.2.2 Analog Signal Chain

We consider the minimization of power consumption in the analog signal chain. For the active elements, we can refer to the switched-capacitor integrator whose schematic is shown in Figure 4.5.

Figure 4.5: Analog signal chain.

The input capacitor is Cin (= C1·C2/(C1 + C2)) and Cf is the feedback capacitor, respectively. Most of the power is dissipated in the integrator amplifier; therefore, it is important to minimize its current consumption for a given clock frequency, load, and parasitic conditions. The simplest amplifier structures are generally the most power efficient, as there are fewer branches contributing to noise and power. Two basic relations determine the influence of the sampling frequency on total power consumption: first, the condition linking the closed-loop time constant and the inverse of the sampling frequency; second, the relation defining the current required to achieve a given transconductance.

During the row time, first the photogenerated signal Vsig is read out of the pixel and stored as Vref - Vsig on the capacitor Cin. Second, after the pixel is reset, Vrst is applied to Cin. These signal and reset values are subtracted at Cin. The opamp yields

    Vout = Vref - (Cin/Cf) · (Vsig - Vrst) · 1 / (1 + (1 + Cin/Cf)/A0) + Voffset · (1 + Cin/Cf)    (4.2)

where Voffset is the opamp offset voltage and A0 is the open-loop gain, respectively.

4.2.3 Global Amplifier

The major problem in the realization of micropower operational amplifiers is obtaining a reasonable speed and an acceptable dynamic range. Battery operation allows relaxed requirements on the power supply rejection, since the power noise only comes from the chip and can be more easily filtered out at very low current [24].

A key factor for power reduction is the avoidance of any compensation capacitor other than the load itself, which is only possible if the major part of the voltage gain is achieved at the output node, that is, by using single-stage operational transconductance amplifiers. A simple OTA is shown in Figure 4.6. The frequency response of the OTA is shown in Figure 4.7.

Figure 4.6: Operational transconductance amplifier.

Assuming a ratio B of the mirror M4-M8, the overall transconductance of the amplifier is

    gm = B · gm1 (= B · gm2).    (4.3)

The circuit, loaded by CL, behaves essentially as an integrator with a time constant

    τu = CL / gm    (4.4)

which is the inverse of the unity-gain frequency. The very low frequency 1/τ1 of the dominant pole depends directly on the DC gain A0. Also, the slew rate is B·Ibias/CL.
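For illustration, the following sketch evaluates relations (4.3)-(4.4) and the slew rate for a set of assumed device values; the mirror ratio, transconductance, bias current, and load capacitance below are placeholders, not the values of the actual design.

```python
# Illustrative numbers only (assumed, not taken from the design): how the
# mirror ratio B, the bias current, and the load CL set the OTA's overall gm,
# unity-gain time constant (Eq. 4.3-4.4), and slew rate.
import math

B      = 4          # assumed current-mirror ratio M4-M8
gm1    = 20e-6      # assumed input-pair transconductance (S)
i_bias = 1e-6       # assumed tail bias current (A)
CL     = 1e-12      # assumed load capacitance (F)

gm    = B * gm1             # Eq. (4.3): overall transconductance
tau_u = CL / gm             # Eq. (4.4): unity-gain time constant
f_u   = 1 / (2 * math.pi * tau_u)
SR    = B * i_bias / CL     # slew rate

print(f"gm = {gm*1e6:.0f} uS, tau_u = {tau_u*1e9:.1f} ns, "
      f"f_u = {f_u/1e6:.1f} MHz, slew rate = {SR/1e6:.1f} V/us")
```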
Figure 4.7: Frequency response of the OTA.

Let us assume a single non-dominant pole with time constant τp. For frequencies much larger than 1/τ1, the voltage gain of the amplifier can be expressed by

    A(jω) ≈ 1 / (jωτu · (1 + jωτp)).    (4.5)

An operational amplifier is usually used in closed loop with an amount β of negative voltage feedback. The settling time with a relative residual error ε can be calculated with the gain given by (4.5). The result may be approximated by [67]

    Ts = (2τp + τu/β) · ln(1/ε).    (4.6)

If the equivalent input noise resistance of the amplifier is RN, the total output noise in closed loop due to this equivalent white noise resistance can be calculated analytically [24]:

    VN² = 4kT·RN · (π·fu)/(2β) = (γ/β) · (kT/CL)    (4.7)

where γ = gm·RN, k is Boltzmann's constant, and T is absolute temperature, respectively. As shown by (4.7), the noise is increased above the theoretical minimum kT/CL by a factor γ/β. It seems interesting to increase the power efficiency of the amplifier by selecting a large value for the mirror ratio B, but this would increase γ and hence the noise proportionally. Relation (4.7) also shows that any reduction of the feedback factor β below unity increases the noise proportionally. Also, reducing the noise by increasing CL requires an increase in current to maintain the small-signal settling time Ts and the slew rate. The input-referred thermal noise is

    vn² = (16kT / (3·gm)) · Δf    (4.8)

where k is Boltzmann's constant, T is absolute temperature, and Δf is the frequency bandwidth, respectively.

4.2.4 Self-Calibrating Successive Approximation ADC

Figure 4.8: Block diagram of the successive approximation ADC.

A successive approximation analog-to-digital converter (ADC) employs a "binary search" through the quantization levels before converging on the final digital answer [75]. The block diagram is seen in Figure 4.8. The timing control logic controls the timing of the N-bit conversion, where N is the resolution of the ADC. Vin is sampled by the sample-and-hold circuit (S/H) and compared to the output of the digital-to-analog converter (DAC). The comparator output controls the direction of the binary search, and the output of the successive approximation register (SAR) is the actual digital conversion. In most cases, a calibration and offset compensation technique is required to reduce the errors deriving from the binary-weighted capacitor array; it can easily be carried out during the start-up phase. This architecture offers an efficient implementation with micropower-level dissipation and fewer active circuit components than other conversion schemes for a given range of conversion bandwidth and resolution. Typically, the successive approximation ADC is the lowest-power approach.

Figure 4.9: The successive approximation ADC using a binary-weighted capacitor array DAC.
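The binary-search convergence described above (Figure 4.8) can be sketched behaviorally as follows. This is an idealized software model of the algorithm only; it does not represent the capacitor-array hardware of Figure 4.9.

```python
# Behavioral sketch of the SAR binary search: each cycle the SAR sets one
# trial bit, and a comparator decision keeps or clears it.
def sar_convert(vin, vref, nbits=8):
    """Return the N-bit code for 0 <= vin < vref using successive approximation."""
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)            # set the next bit (DAC trial level)
        vdac = vref * trial / (1 << nbits)   # ideal DAC output for the trial code
        if vin >= vdac:                      # comparator decision
            code = trial                     # keep the bit
    return code

# Example: an 8-bit conversion with Vref = 1.2 V
print(sar_convert(0.45, 1.2))   # -> 96 (0x60), i.e. 96/256 * 1.2 V = 0.45 V
```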
One of the most popular types of successive approximation architecture uses a binary-weighted capacitor array as its DAC. The block diagram of the N-bit successive approximation ADC with the binary-weighted capacitor array DAC is shown in Figure 4.9 as an example. The pixel signal level (Vsig) is sampled onto the signal capacitor bank and the reset level (Vrst) onto the sample capacitor. This approach uses the binary-scaled capacitors to sample the pixel signal voltage, but we could switch to the reset voltage instead. These capacitor networks are connected to the input of a comparator. After clamping these levels on the top plates of the capacitors, the bottom plates are successively connected to the ADC reference voltage (Vref). The voltage increase on the top plate is proportional to the relative size of the capacitor to the total capacitance of the network. The comparator output determines which side sees an increase in the top-plate voltage.

Figure 4.10: The successive approximation ADC using an additional capacitor.

As can be seen, in general we need Vref to set the reference level, which means the reference voltage must be provided either from off-chip through a pad or from an on-chip reference circuit. For a micropower-level implementation of the ADC, Vref generation would take a large portion of the total power consumption, so we developed a means of generating the reference voltage internally. The idea is simply to use Vdd (the chip power supply) to create the Vref level, using an additional capacitor. In Figure 4.9, the LSB (least significant bit) voltage level will be

    VLSB = Vref / (2^N - 1) = Vdd / (2^N - 1) = Vref · (C / Ctot)    (4.9)

where Ctot = (2^N - 1)·C and Vref = Vdd. If we increase Ctot, we can adjust VLSB, so we will have a different effective value of Vref. For instance, if we add 2^(N-1)·C to the capacitor array as shown in Figure 4.10, we will have

    VLSB = Vref / (2^N + 2^(N-1) - 1) = Vdd / (2^N + 2^(N-1) - 1) = Vref · (C / Ctot)    (4.10)

where Ctot = (2^N + 2^(N-1) - 1)·C and Vref = Vdd. So the effective value of Vref will be about two-thirds of Vdd instead of Vdd.

To gain an idea of the power consumed, let us consider the 8-bit successive approximation ADC with a sampling frequency of 1 MHz, a power supply Vdd of 1.2 V, and a unit capacitance C of 14 fF. We assume that the conversion time Ts = 1/fs is equally divided into two time slots: the precharge-sample phase and the eight steps of the conversion phase. A longer precharge-sample period is justified by the need for precise settling while using a limited current in the active block. The clock frequency during the conversion step is therefore fck = 2 x 8 x fs = 16 MHz. The total power consumption is given by

    Ptot ≈ Pprech + Pconv + Pcomp    (4.11)

where Pprech, Pconv, and Pcomp are the power consumption due to the precharge phase, the conversion step, and the comparator, respectively. For the precharge phase we obtain

    Pprech = (1/2) · Ctot · Vref² · fs ≈ 0.73 μW.    (4.12)

The current needed to precharge the capacitor array is provided through Vdd. The power consumption due to the conversion phase is of the same form and likewise evaluates to approximately

    Pconv ≈ 0.73 μW.    (4.13)
During the conversion, the strobed comparator works with a duty cycle of 1/4 and has

    Pcomp = V x I x duty cycle = 1.2 V x 10 μA x 1/4 = 3.0 μW.    (4.14)

The total power consumption Ptot is therefore less than 5 μW.

The real implementation of the low-power 8-bit successive approximation ADC is shown in Figure 4.11. The ADC consists of a capacitor bank, a comparator, decision latches, and correction latches. We use the rail supply voltage Vdd as the ADC reference voltage and PMOS switches to connect this voltage. The calibration portion of the ADC serves to eliminate the dynamic comparator offset, which is typically 30 mV; it has 5 capacitor bit cells.

Figure 4.11: Low-power 8-bit successive approximation ADC.

The main ADC conversion uses 8 binary-scaled capacitors to sample the amplifier signal and a reset capacitor to store the amplifier reset voltage. These capacitor networks are connected to the input of the comparator. After saving the signal and reset voltages on the top plates of the capacitors, the bottom plates are successively connected to Vdd. The comparator output determines whether or not the signal side maintains the updated signal on the top plate. For low-voltage operation of this kind of ADC, it is usually essential to have a comparator with a wide input voltage swing. Although we have designed a special rail-to-rail input dynamic comparator, in this project we have used a different approach. The idea is that, during the convergence process, the variable signal is matched to a fixed amplifier reset voltage. Not only does this allow the use of a limited-input-swing comparator, but it also causes the comparison to happen at the same level each time, eliminating the potential comparator offset vs. signal dependence.

4.2.5 Reference Circuit

The reference generators are required to be stable over process, voltage, and temperature variations. The bandgap reference is one of the most popular reference voltage generators that achieves this requirement. The bandgap reference adds the forward-bias voltage across a pn diode to a voltage that is proportional to absolute temperature (PTAT) to produce an output that is insensitive to changes in temperature [36,69]. The bandgap voltage generated by the bandgap reference circuit is a function of the sizing of the current mirror devices, the ratio of a first resistor to a second resistor, and the number and relative sizing of the bipolar junction transistors used. Figure 4.12 shows the conventional bandgap reference circuit, which is composed of a CMOS opamp, diodes, and resistors. The general diode current I versus forward voltage Vf relation is expressed as

    I = Is · (exp(Vf / VT) - 1)    (4.15)

where VT is the thermal voltage (kT/q, with k Boltzmann's constant (1.38 x 10^-23 J/K), T absolute temperature, and q the electronic charge (1.6 x 10^-19 C)), and Is is the saturation current, respectively.

Figure 4.12: Conventional bandgap reference circuit.
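Before turning to the circuit behavior, relation (4.15) can be evaluated numerically with a small sketch. The saturation current and bias currents below are assumed example values, not those of the actual diodes.

```python
# Numerical sketch of the diode relation (4.15) with an assumed saturation
# current Is; the values are illustrative, not from the actual devices.
import math

k, q = 1.38e-23, 1.6e-19
T = 300.0                        # absolute temperature (K)
VT = k * T / q                   # thermal voltage, ~25.9 mV at 300 K
Is = 1e-16                       # assumed saturation current (A)

def vf(i):
    """Forward voltage from (4.15), inverted: Vf = VT * ln(I/Is + 1)."""
    return VT * math.log(i / Is + 1)

print(f"VT = {VT*1e3:.1f} mV")
print(f"Vf(1 uA)  = {vf(1e-6):.3f} V")
print(f"Vf(10 uA) = {vf(10e-6):.3f} V   (about VT*ln(10) ~ 60 mV higher)")
```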
In the conventional circuit, the pair of input voltages of the opamp, Va and Vb, are forced to the same voltage. Then

    Vf1 ∝ -2 mV/°C,  VT ∝ +0.086 mV/°C,
    dVf = Vf1 - Vf2 = VT · ln(N·R2/R1),    (4.16)
    Vref = Vf1 + (R2/R3) · dVf ≡ Vref,conv

where Vf is the built-in (forward) voltage of the diode, and dVf is the forward voltage difference between the single diode D1 and the N diodes D2, which is proportional to the thermal voltage, respectively.

Figure 4.13: Low-voltage bandgap reference circuit.

The output voltage of the conventional bandgap reference is 1.25 V. This fixed output voltage of 1.25 V limits low-voltage operation, so we need to consider a bandgap reference that can successfully operate from a sub-1.2 V supply [5]. Figure 4.13 shows the low-voltage bandgap reference circuit. The output voltage of this low-voltage bandgap reference circuit becomes

    Vref,low-voltage = R4 · (Vf1/R2 + dVf/R3) = (R4/R2) · Vref,conv.    (4.17)

Therefore, Vref,low-voltage can be freely scaled from Vref,conv, and Vdd can be lowered below 1 V if the opamp operates properly.

4.2.6 Bias Circuit

Figure 4.14 shows the bias circuits for the current load of the column signal chain and for the reference voltage of the global amplifier. Because of power and area constraints, we use a simple voltage divider. Vln is the bias voltage for each column's source follower, which sets the 1 μA load current permitting charging of the sampling capacitors within the sampling time. Vref is the reference voltage for the OTA, as shown in Figure 4.5.

Figure 4.14: Bias circuits for (a) Vln and (b) Vref.

4.2.7 Power-On-Reset

A power-on-reset circuit provides the stable generation of a reset signal without being affected by the rising characteristic of the power-supply voltage, as shown in Figure 4.15. This power-on-reset circuit includes two MOSFETs (M1 and M2), a capacitance (C), and two inverters. In this circuit, M1 acts as a resistance R with a large threshold voltage and M2 is a pull-down switch; the reset signal is determined by the difference of the threshold voltages between M1 and M4 in the first inverter and by the RC time constant of M1 and C.

Figure 4.15: Power-on-reset circuit.

4.2.8 Bootstrapping Switch

Doubling the amplitude of the clock signals should suffice to turn on transfer switches even under 1.2 V. Figure 4.16 shows a schematic of the bootstrap switch circuit adopted from [61]. In the low state (GND) of the input, the capacitor C is charged to Vdd and the output is low. When the input is in the high state (Vdd), the top plate of the capacitor reaches a voltage equal to 2·Vdd. The output then reaches a voltage given by the charge partition between C and Cload, where Cload is the total capacitive loading.

Figure 4.16: Bootstrapping circuit.

4.3 Digital Building Blocks

4.3.1 Row and Column Select Logic

Generally, there are two basic types of row/column address selection: NAND-type decoders and shift-register arrays.
The row and column addresses in the micropower sensor are controlled by a shift-register configuration using two non-overlapping clocks Φ1 and Φ2, as shown in Figure 4.17. Although we sacrifice the windowing and random-access functions, which are not necessary in a small-format image sensor, there are several advantages. First, it reduces the number of global buses. Second, it reduces silicon area. Third, and most importantly, it reduces power consumption.

Figure 4.17: Dynamic shift register.

4.3.2 Row Driver

The row driver receives the rowsel signal from the row shift-register array and generates the row and rst signals, gated by rowen and rsten, as shown in Figure 4.18. The bottom of Figure 4.18 shows a schematic of the bootstrapping switch circuit. In the low state of rst_in, the capacitor C is charged to Vdd and the output rstboost is low. When rst_in is at Vdd, the top plate of the capacitor reaches a voltage equal to 2·Vdd. The output then reaches a voltage given by the charge partition between C and the capacitive loading of the selected pixels and the row driver. Increasing the amplitude of the rst signal to Vdd + threshold voltage increases the pixel reset voltage and thereby extends the pixel dynamic range; the pixel voltage dynamic range is thus increased by using this bootstrapped reset pulse.

Figure 4.18: Row driver with bootstrapping switch.

4.3.3 On-Chip Clock Generator

We use a 3-stage ring oscillator as the on-chip clock generator, as shown in Figure 4.19. The clock supply Vclock comes from the on-chip bandgap reference circuitry shown in Figure 4.13. The signal rstb is generated by the power-on-reset circuitry shown in Figure 4.15.

Figure 4.19: On-chip clock generator.

4.3.4 Timing & Control Block

The digital timing & control block performs a number of operations. It generates the proper sequencing of the row address, column address, and ADC timing, as well as the synchronization pulses for the pixel data going off-chip. This block was written in a hardware description language and synthesized with a standard cell library. This standard cell library is optimized for 1.2 V operation, as shown in Table 4.1.

Table 4.1: 1.2 V standard cell library.
  INV:  Invx1 (small inverter), Invx2, Invx3, Invx8
  NAND: Nand2x1 (2-input), Nand3x1 (3-input), Nand4x1 (4-input)
  NOR:  Nor2x1 (2-input), Nor3x1 (3-input), Nor4x1 (4-input)
  AND:  And2x1 (2-input), And3x1 (3-input), And4x1 (4-input)
  OR:   Or2x1 (2-input), Or3x1 (3-input), Or4x1 (4-input)
  BUF:  Bufx1 (small buffer), Bufx2, Bufx3, Bufx8 (large buffer)
  XOR:  Xor2x1 (2-input exclusive OR)
  XNOR: Xnor2x1 (2-input exclusive NOR)
  MUX:  Mux2x1 (2-input multiplexer)
  DFF:  Dffr (D flip-flop with reset), Dffs (D flip-flop with set),
        Tdffr (test-equivalent D flip-flop with reset), Tdffs (test-equivalent D flip-flop with set)
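As a software illustration of the shift-register style selection of Section 4.3.1, the following sketch models the one-hot scan that replaces an address decoder. It is a behavioral model only; the dynamic two-phase circuit of Figure 4.17 is not represented.

```python
# Behavioral sketch of shift-register row selection: a single '1' is loaded
# at the start of a frame and shifted one position per row time, so exactly
# one row is selected at a time and no address decoder is needed.
ROWS = 144

def row_scan():
    select = [0] * ROWS
    select[0] = 1                     # token loaded by the frame-start pulse
    for _ in range(ROWS):
        yield list(select)            # current one-hot row-select pattern
        select = [0] + select[:-1]    # one shift per row time (clock phase pair)

for row, pattern in enumerate(row_scan()):
    assert pattern.index(1) == row    # row 'row' is the only one selected
```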
4.3.5 Data Coding Logic

Figure 4.20: Data format (frame header, row headers, calibration rows, valid image data, and row tails).

The data format is shown in Figure 4.20. The header of a frame is "00A000A0" in hexadecimal format, which is "00000000101000000000000010100000" in binary. The header of a row is "00000000" in hexadecimal format, which is "00000000000000000000000000000000" in binary, and the tail of a row is "B0B0" in hexadecimal format, which is "1011000010110000" in binary. Before we send the 8-bit serial data signal, however, it may be put into several different forms. The choice among these different ways of conveying the binary code information depends on the type of modulation and demodulation employed and on other constraints such as bandwidth, receiver complexity, etc.

The non-return-to-zero (NRZ) representations reduce the bandwidth needed to send the pulse-code modulation (PCM) code [64]. In the NRZ(L) representation, a bit pulse remains at one of its two levels for the entire bit interval. In the NRZ(M) method, a level change is used to indicate a mark (a 1) and no level change indicates a 0; the NRZ(S) method uses the same scheme except that a level change is used to indicate a space (a 0). Both of these are examples of the more general classification NRZ(I), in which a level change (inversion) is used to indicate one kind of binary digit and no level change indicates the other digit. The NRZ representations are efficient in terms of the bandwidth required and are widely used. We choose the NRZ(L) representation. Note that the use of NRZ representations requires some added receiver complexity to recover the clock frequency.

4.4 Summary

To conclude, we have designed a prototype micropower 1.2 V CMOS image sensor. This image sensor could be powered by a watch battery for a few months. The image sensor architecture and the detailed analog and digital building blocks of this micropower CMOS active pixel image sensor were described. Power dissipation was the prime design constraint for this micropower image sensor, along with a small die size. The low-power design methodology was considered at all levels - technology, circuit and logic, architecture, algorithm, and system integration - and the low-power design techniques used in this image sensor have been described. Figure 4.21 summarizes the low-power design steps, from process technology to system integration, taken in this research.

Figure 4.21: Low-power design steps in this research (0.35 μm CMOS 2P 3M process technology; low-voltage 1.2 V circuits with bootstrapping switches and level shifters; activity-driven power-down and clock gating; column-parallel analog signal chain with shift-register decoding; off-chip color processing and NRZ data representation; system-on-a-chip integration with on-chip clock generator and wide-voltage operation).
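As an addendum to the data coding logic of Section 4.3.5, the following sketch assembles the frame/row framing and the NRZ(L) line levels. It is illustrative only: the header and tail bytes are those quoted above, but the on-chip bit ordering and packing are assumptions, not verified against the design.

```python
# Sketch of the Section 4.3.5 framing and NRZ(L) serialization (illustrative;
# the bit ordering and packing are assumed).
FRAME_HEADER = bytes.fromhex("00A000A0")
ROW_HEADER   = bytes.fromhex("00000000")
ROW_TAIL     = bytes.fromhex("B0B0")

def serialize_frame(rows):
    """rows: list of lists of 8-bit pixel codes -> framed byte stream."""
    out = bytearray(FRAME_HEADER)
    for row in rows:
        out += ROW_HEADER + bytes(row) + ROW_TAIL
    return bytes(out)

def nrz_l(stream, msb_first=True):
    """NRZ(L): the line level simply equals each bit for the whole bit interval."""
    bits = []
    for byte in stream:
        order = range(7, -1, -1) if msb_first else range(8)
        bits.extend((byte >> b) & 1 for b in order)
    return bits

frame = serialize_frame([[17, 34, 51]])          # one 3-pixel "row" as a demo
print(frame.hex())                               # 00a000a000000000112233b0b0
print(nrz_l(frame)[:16])                         # first 16 line levels
```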
Chapter 5 Experimental Results

This chapter describes measurement results for the first- and second-generation micropower CMOS image sensors. The first chip has extra facilities for testing and optimizing the principal blocks. The results of the first run were used to design a product-grade miniature sensor in the second stage of the chip development effort; this advanced miniature sensor can operate with just 3 external pads. First, the first-generation micropower CMOS image sensor is characterized. Second, the second-generation micropower CMOS image sensor is characterized, and the measurement results of the on-chip clock generator are discussed. Third, the first- and second-generation sensors are compared, and a comparison of both sensors with previously reported image sensors in terms of the new power figure of merit described in Chapter 3 is shown. Fourth, the scalability to different image formats is discussed.

5.1 First-Generation Image Sensor

5.1.1 First-Generation Sensor Micrograph

The first-generation sensor is implemented in a 0.35 μm, 2P, 3M, 3.3 V CMOS process with Vtn = 0.65 V and Vtp = -0.85 V [12]. A micrograph of the first-generation image sensor core is shown in Figure 5.1. The size of the sensor core is about 1.2 mm x 1.2 mm, which includes the pixel array, row/column logic, analog readout, ADC, and biases. The overall chip size is about 4.2 mm x 4.2 mm, mainly because of the 82 pads, which include some externally controlled input pins such as a master clock and ADC controls. This chip is packaged in an 84-pin pin grid array (PGA).

Figure 5.1: Sensor core micrograph of the first-generation image sensor.

5.1.2 First-Generation Sensor Characterization

The sensor chip has wide voltage-range operation from 1.0 to 3.6 V. Table 5.1 summarizes the sensor chip characteristics at 6.5 frames per second (fps) and a 1.2 V power supply. The measured ADC differential nonlinearity (DNL) is 1 least significant bit (LSB) and the integral nonlinearity (INL) is 2 LSB.

Table 5.1: Specification and measured sensor performance at 1.2 V and 6.5 fps.
  Technology: 0.35 μm, 2P, 3M CMOS
  Pixel array size: 176(H) x 144(V) (QCIF)
  Scanning: progressive
  Pixel size: 5 μm x 5 μm
  Pixel type: photodiode APS
  Pixel fill factor: 30%
  Sensor core size: 1.2 mm x 1.2 mm
  Sensor output: 8-bit parallel digital
  On-chip ADC: 8-bit single successive approximation ADC
  DNL/INL: 1 LSB / 2 LSB
  Conversion gain (pixel PD-referred): 20 μV/e-
  ADC conversion gain: 2.8 mV/LSB
  Dark signal: 4.64 LSB/sec or 13 mV/sec or 650 e-/sec
  Saturation (pixel PD-referred): 214.3 LSB or 600 mV or 30,000 e-
  Noise: 0.43 LSB or 1.2 mV or 60 e- r.m.s.
  Operating voltage: 1.0 - 3.6 V
  Maximum frame rate: 20 fps
  Maximum pixel readout rate: 0.5 Mpix/sec
  Measured power consumption: 48 μW at 1.2 V, 20 fps, w/o pads; ~1 mW with 82 pads at 3.3 V

The sensor core, which includes the pixel array, row/column logic, analog readout, ADC, and biases, dissipates only 48 μW at 1.2 V and 20 fps, as shown in Table 5.1.
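The unit conversions quoted in Table 5.1 can be cross-checked with a short sketch that simply applies the stated conversion gains; it is not part of the characterization setup.

```python
# Cross-check of the Table 5.1 unit conversions using the quoted gains.
ADC_GAIN = 2.8e-3     # ADC conversion gain, V per LSB
PIX_GAIN = 20e-6      # pixel (PD-referred) conversion gain, V per electron

def lsb_to_mv(lsb):  return lsb * ADC_GAIN * 1e3
def mv_to_e(mv):     return mv * 1e-3 / PIX_GAIN

for name, lsb in [("saturation", 214.3), ("noise", 0.43), ("dark signal (per sec)", 4.64)]:
    mv = lsb_to_mv(lsb)
    print(f"{name}: {lsb} LSB -> {mv:.1f} mV -> {mv_to_e(mv):.0f} e-")
# -> ~600 mV and ~30,000 e- saturation, ~1.2 mV and ~60 e- noise,
#    ~13 mV/sec and ~650 e-/sec dark signal, matching Table 5.1.
```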
Although the goal is a self-clocked sensor with 3 pads (GND, VDD (1.2-1.7 V), DATAOUT), the first implementation of the sensor includes some externally controlled input pins such as a master clock and ADC controls. To simplify communication with the sensor, we designed I/O pads that perform 3.3 V -> 1.2 V and 1.2 V -> 3.3 V level conversion. Because external loads are driven at 3.3 V, the overall chip power consumption is approximately 1 mW. Table 5.2 shows the estimated chip core power portfolio at a 1.2 V power supply and 20 fps. Figure 5.2 shows the measured sensor core power consumption at 20 fps from 1.1 to 1.7 V power supply with a 16.5 MHz master clock.

Table 5.2: Estimated chip core power portfolio at 1.2 V power supply and 20 fps.
  Column analog signal chain (Vln): 1 μA x 176 columns x (1/50 duty) = 3.5 μA average
  Global opamp: 20 μA x 1 x (1/2 duty) = 10 μA
  ADC (comparator): 10 μA x 1 x (1/4 duty) = 2.5 μA
  Biases (Vln + Vref): 10 μA x 1 = 10 μA
  Peripheral (row & column logic + rst bootstrapping circuit + drivers): 14 μA x 1 = 14 μA
  Total current: 40 μA
  Total power (V x I): 1.2 V x 40 μA = 48 μW
  (Column-chain peak current: 176 μA; the fractions are duty-cycle factors.)

Figure 5.2: Measured sensor core power consumption at 20 fps from 1.1 to 1.7 V power supply.

5.1.3 Test Images

Images taken from the sensor at different power supply voltages are shown in Figure 5.3.

Figure 5.3: Test images at different power supply voltages. (a) 1.0 V (b) 1.2 V (c) 1.5 V (d) 2.0 V (e) 2.5 V (f) 3.3 V.

5.2 Second-Generation Image Sensor

5.2.1 Second-Generation Sensor Micrograph

The second-generation sensor is also implemented in a 0.35 μm, 2P, 3M, 3.3 V CMOS process with Vtn = 0.65 V and Vtp = -0.85 V. A micrograph of the second-generation image sensor is shown in Figure 5.4. The size of the chip is about 2 mm x 2 mm, which includes the pixel array, row/column logic, analog readout, ADC, biases, on-chip clock generator, timing & control block, and 16 pads. This chip is packaged in a 28-pin ceramic leadless chip carrier (CLCC).

Figure 5.4: Second-generation image sensor.

5.2.2 Second-Generation Sensor Characterization

The chip can operate autonomously with 3 pads (GND, VDD (1.2-1.7 V), DATAOUT). With an external master clock, the chip can also operate from a 1.2 to 3.6 V power supply. Table 5.3 summarizes the sensor chip characteristics at 5 fps and a 1.5 V power supply with an external 4.125 MHz clock.

Table 5.3: Specification and measured sensor performance at 1.5 V and 5 fps.
  Technology: 0.35 μm, 2P, 3M CMOS
  Pixel array size: 176(H) x 144(V) (QCIF)
  Pixel size and type: 5 μm x 5 μm photodiode APS
  Pixel fill factor: 30%
  Chip size: 2 mm x 2 mm
  Sensor output: 8-bit serial digital
  On-chip ADC: 8-bit single successive approximation ADC
  DNL/INL: 1 LSB / 2 LSB
  Conversion gain (pixel PD-referred): 34 μV/e-
  ADC conversion gain: 3.5 mV/LSB
  Dark signal: 6.97 LSB/sec or 24.4 mV/sec or 718 e-/sec
  Saturation (pixel PD-referred): 253.2 LSB or 886.2 mV or 26,065 e-
  Noise: 0.85 LSB or 3.0 mV or 88 e- r.m.s.
  Operating voltage: 1.2 - 3.6 V
  Maximum frame rate: 40 fps
  Maximum pixel readout rate: 1 Mpix/sec
  Power consumption: 550 μW at 1.5 V, 30 fps, 16 pads

The measured power consumption of the overall chip, which includes the pixel array, row/column logic, analog readout, ADC, biases, timing & control block, on-chip clock generator, and pads, is shown in Figure 5.5 with the internal 25.2 MHz on-chip clock (30 fps) from 1.2 to 1.7 V power supply. At 1.5 V, the measured power consumption is about 550 μW.

Figure 5.5: Measured power consumption at 30 fps from 1.2 to 1.7 V with the 25.2 MHz on-chip clock.

The estimated overall chip power consumption is 547.5 μW at 1.5 V and 30 fps with the internal 25.2 MHz clock, as shown in Table 5.4. Note that the timing and control block consumes about half of the total power. The measured power consumption of the overall chip with the external 16.5 MHz clock (20 fps) from 1.2 to 3.3 V power supply is shown in Figure 5.6.

Table 5.4: Estimated chip power portfolio at 30 fps and 1.5 V power supply.
  Column analog signal chain (Vln): 1.4 μA x 176 columns x (1/50 duty) = 5 μA average
  Global opamp: 30 μA x 1 x (1/2 duty) = 15 μA
  ADC (comparator): 16 μA x 1 x (1/4 duty) = 4 μA
  Biases (Vln + Vref): 16 μA x 1 = 16 μA
  Peripheral (row & column logic + rst bootstrapping circuit + drivers): 20 μA x 1 = 20 μA
  Clock generator: 75 μA x 1 = 75 μA
  Timing & control: 170 μA x 1 = 170 μA
  Dataout: 60 μA x 1 = 60 μA
  Total current: 365 μA
  Total power (V x I): 1.5 V x 365 μA = 547.5 μW
  (Column-chain peak current: 246.4 μA; the fractions are duty-cycle factors.)

Figure 5.6: Measured power consumption at 20 fps from 1.2 to 3.3 V with the external 16.5 MHz clock.

Figure 5.7 shows the measured power consumption of the overall chip at a 1.5 V power supply for different frame rates. Figure 5.8 shows the measured power consumption of the overall chip at a 2.7 V power supply for different frame rates.

Figure 5.7: Measured power consumption at 1.5 V power supply for different frame rates.

Figure 5.8: Measured power consumption at 2.7 V power supply for different frame rates.
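The Table 5.4 portfolio can be summed with a short sketch, which also expresses the result as energy per output pixel. The energy-per-pixel expression below assumes the Chapter 3 figure of merit is simply power divided by (pixel count x frame rate); that interpretation is an assumption made here for illustration.

```python
# Sketch: summing the Table 5.4 power portfolio and expressing the result as
# energy per output pixel (assumed figure of merit: power / (pixels x fps)).
VDD = 1.5                           # V
avg_currents_uA = {                 # average currents from Table 5.4 (uA)
    "column chain": 5, "global opamp": 15, "ADC comparator": 4,
    "biases": 16, "peripheral": 20, "clock generator": 75,
    "timing & control": 170, "dataout": 60,
}
i_total = sum(avg_currents_uA.values())          # 365 uA
p_total = VDD * i_total * 1e-6                   # 547.5 uW

pixels, fps = 176 * 144, 30
energy_per_pixel = p_total / (pixels * fps)      # J per pixel per frame
print(f"total = {p_total*1e6:.1f} uW, "
      f"~{energy_per_pixel*1e9:.2f} nJ/pixel at {fps} fps")
```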
5.2.3 On-Chip Clock Generator

We use a 3-stage ring oscillator as the on-chip clock generator, with its supply provided by the on-chip low-voltage bandgap reference circuitry. The on-chip low-voltage bandgap reference circuitry generates 0.9 V at a 1.2 V power supply. The measurement result is shown in Figure 5.9. The on-chip clock frequency varies from 26 MHz down to 22.5 MHz as the supply is lowered from 1.7 V to 1.1 V, corresponding to less than 15% variation. This implies that the bandgap reference circuitry exhibits less than 5 mV of variation from 0.9 V, which is less than 1% variation.

Figure 5.9: Measured frequency response of the on-chip clock generator.

5.2.4 Test Images

Images taken with the sensor at 30 fps (25.2 MHz on-chip clock) with 1.5 V and 1.7 V power supplies are shown in Figure 5.10. Images taken with the sensor at 20 and 40 fps with a 1.5 V power supply and an external clock are shown in Figure 5.11. Looking at these images, column-wise fixed pattern noise is visible. The first-generation image sensor does not show recognizable fixed pattern noise. We believe the main cause is capacitor mismatch, because we reduced the sampling capacitor size by 50% relative to the first generation and used a different type of capacitor. When the dark frame is subtracted, a clean image is obtained.

Figure 5.10: Test images with the on-chip clock at 30 fps. (a) 1.5 V. (b) 1.7 V.

Figure 5.11: Test images with 1.5 V power supply. (a) 20 fps. (b) 40 fps.

Images taken with the sensor at 20 and 40 fps with a 2.7 V power supply are shown in Figure 5.12. In these images, less column-wise fixed pattern noise is seen.

Figure 5.12: Test images with 2.7 V power supply. (a) 20 fps. (b) 40 fps.

5.3 Comparison

We compare the first- and second-generation sensors as shown in Table 5.5.

Table 5.5: Comparison of the first- and second-generation sensors (first-generation | second-generation).
  Technology: 0.35 μm, 2P, 3M CMOS | 0.35 μm, 2P, 3M CMOS
  Pixel array size: 176(H) x 144(V) (QCIF) | 176(H) x 144(V) (QCIF)
  Scanning: progressive | progressive
  Pixel size: 5 μm x 5 μm | 5 μm x 5 μm
  Pixel type: photodiode APS | photodiode APS
  Pixel fill factor: 30% | 30%
  Chip size: 4.2 mm x 4.2 mm | 2.0 mm x 2.0 mm
  Pin number: 82 | 16
  Sensor output: 8-bit parallel digital | 8-bit serial digital
  On-chip ADC: 8-bit single successive approximation ADC | 8-bit single successive approximation ADC
  DNL/INL: 1 LSB / 2 LSB | 1 LSB / 2 LSB
  Conversion gain (pixel PD-referred): 20 μV/e- | 34 μV/e-
  ADC conversion gain: 2.8 mV/LSB | 3.5 mV/LSB
  Dark signal: 4.64 LSB/sec or 13 mV/sec or 650 e-/sec | 6.97 LSB/sec or 24.4 mV/sec or 718 e-/sec
  Saturation (pixel PD-referred): 214.3 LSB or 600 mV or 30,000 e- | 253.2 LSB or 886.2 mV or 26,065 e-
  Noise: 0.43 LSB or 1.2 mV or 60 e- r.m.s. | 0.85 LSB or 3.0 mV or 88 e- r.m.s.
  Operating voltage: 1.0 - 3.6 V | 1.2 - 3.6 V
  Maximum frame rate: 20 fps | 40 fps
  Maximum pixel readout rate: 0.5 Mpix/sec | 1 Mpix/sec
  Measured power consumption: 48 μW at 1.2 V, 20 fps, w/o pads; ~1 mW with 82 pads at 3.3 V | 300 μW at 1.2 V, 30 fps, 16 pads; 550 μW at 1.5 V, 30 fps, 16 pads
  Features: external clock & timing, on-chip & external biases, level-shift I/O | on-chip clock & timing, on-chip & external biases, power-on-reset

Figure 5.13 shows the power consumption of state-of-the-art CMOS image sensors. It compares image sensors presented at ISSCC in previous years [12,14,38,41,50,52,57,62,65,72,75].

Figure 5.13: Comparison of the first- and second-generation sensors with previously reported image sensors in terms of the new power figure of merit (nJoules/pixel) versus year of ISSCC publication. (Note: semi-logarithmic scale.)

5.4 Scalability

From the experience of this work, we predict the scalability to different image formats shown in Table 5.6.

Table 5.6: Scalability for different image formats at 1.5 V power supply.
  QCIF (176 x 144; 25,344 pixels): 1 ADC, 30 fps, 600 μW; 2 ADC (MUX), 60 fps, 700 μW; 4 ADC + 2 Amp, 120 fps, 900 μW
  CIF (352 x 288; 101,376 pixels): 1 ADC, 7.5 fps, 700 μW; 2 ADC (MUX), 15 fps, 800 μW; 4 ADC + 2 Amp, 30 fps, 1000 μW; 8 ADC + 4 Amp, 60 fps, 1400 μW
  VGA (640 x 480; 307,200 pixels): 2 ADC (MUX), 5 fps, 1000 μW; 4 ADC + 2 Amp, 10 fps, 1400 μW; 8 ADC + 4 Amp, 20 fps, 2000 μW; 16 ADC + 8 Amp, 40 fps, 3200 μW

5.5 Summary

The measurement results of the first-generation image sensor, designed for 1.2 V operation with 48 μW chip-core power consumption while providing 176(H) x 144(V) QCIF 8-bit monochrome video at 20 fps, were presented. Low-voltage image sensor techniques have been successfully demonstrated, and the possibility of wide voltage-range operation (1.2-3.6 V) has been shown.

The measurement results of the second-generation, self-clocked image sensor were also presented. This sensor can be operated with only 3 pads (GND, VDD (1.2-1.7 V), DATAOUT). The measured power consumption of the overall chip with the internal 25.2 MHz on-chip clock (30 fps) at a 1.5 V power supply is about 550 μW. This image sensor dissipates one to two orders of magnitude less power than current state-of-the-art CMOS image sensors.

Chapter 6 Conclusion

The problem addressed in this work was the development of a micropower CMOS APS that dissipates one to two orders of magnitude less power than current state-of-the-art CMOS image sensors and occupies only a few square millimeters in area. This miniature micropower camera-on-a-chip would require such a low power level that it could be powered from a watch battery for a month, or even by a light beam.
In order to achieve the design goals, a low-power and low-voltage design methodology has been explored and applied throughout the design process, from the system level to the process level, while realizing the performance needed to satisfy the design specification.

As the first-generation low-power sensor, a micropower 176 x 144 CMOS APS with an on-chip 8-bit successive approximation ADC that operates at 20 frames per second from a 1.2 V power supply was implemented. The sensor core, which includes the pixel array, row/column logic, analog readout, ADC, and biases, dissipates only 48 μW at 20 fps. Even with 1.2 V to 3.3 V level-shifting I/O pads, the overall dissipation is less than 1 mW. The sensor is implemented in 0.35 μm, 2P, 3M, 3.3 V CMOS technology.

The second-generation image sensor is a self-clocked image sensor that can be operated with only 3 pads (GND, VDD (1.2-1.7 V), DATAOUT). The measured power consumption of the overall chip with the internal 25.2 MHz on-chip clock (30 fps) at a 1.5 V power supply was about 550 μW. This sensor moves us closer to the realization of the 'Dick Tracy' video watch because of its ability to run on a watch battery and its tiny footprint. It is the world's lowest-power CMOS image sensor, and it is expected that the technology will lead to exciting new kinds of wireless digital cameras.

The important technical achievements of the CMOS APS imager chip designed in this research are as follows. A low-voltage image sensor operating below a 1.5 V power supply is achieved; the operating power supply voltage of state-of-the-art CMOS image sensors is around 2.5 V. Low power consumption, below 1 mW at a 1.5 V power supply, is achieved through the low-power design methodology developed in this research; this is one to two orders of magnitude less power than current state-of-the-art CMOS image sensors. The wide operating voltage range, from a nominal 1.5 V up to 3.3 V, not only increases system flexibility but also enables low power consumption. The on-chip ADC, timing generator, control circuit, and single voltage input increase system robustness and simplify the system design by reducing the component count; for example, the APS eliminates the need for the level shifters, timing generator, correlated double sampling, and ADC chips which must accompany CCD imagers. As a self-clocked image sensor, the chip can be operated with only 3 pads (GND, VDD (1.2-1.7 V), DOUT), reducing the overall system pin requirements by combining functionality into a system-on-a-chip. The low-voltage on-chip 8-bit successive approximation ADC with self-calibration and self-reference has extended accuracy, gives the flexibility of a digital interface, and enhances system robustness through noise immunity against pickup and crosstalk. It should be noted that the developed image sensor, which offers unprecedented reductions in system power and cost and increases in system robustness and flexibility for battery-operated imaging, is substantially different from and far more advanced than earlier reported sensors.

Six areas of future work for our image sensor will be discussed. The first is improving the characteristics of our image sensor, such as power consumption, pixel resolution, and fixed pattern noise.
The second area is integrating a digital signal processor (for color processing and compression) and a radio frequency (RF) interface for wireless systems. This means developing a prototype wireless image sensor system capable of transmitting image data.

The third area is eliminating the need for a battery power supply. In long-lived embedded sensor systems where battery replacement is difficult, generating power from ambient sources becomes imperative. A circuit powered by ambient sources has a potentially infinite lifetime, as long as the source persists.

The fourth area is exploring a wireless micro-sensor array network system. Such a system consists of battery-operated sensors that work together to achieve a desired goal, and it enables the reliable monitoring of a variety of environments for applications such as home security, medical monitoring, and a variety of military applications.

The fifth area is distributed vision. Because of its small size, low power, and low cost, our image sensor uniquely lends itself to applications in which large numbers of sensor arrays are used to form various kinds of camera arrays akin to the eyes of insects, with fields of view capable, in principle, of extending to 4π steradians. Such arrays have many potential applications in surveillance, autonomous navigation, and in-situ inspection.

The final area is investigating on-chip testability for image sensors, i.e., built-in self-test (BIST). Test time and cost reduction is one of the most important issues in high-volume image sensor manufacturing, and the high cost of image sensor testing can be reduced significantly by building an on-chip tester.

References

[1] K. Aizawa et al., "On sensor video compression," IEEE Workshop on CCDs and Advanced Image Sensors, Dana Point, CA, Apr. 1995.

[2] C. Anagnostopoulos et al., "An integrated CMOS/CCD sensor for camera autofocus," Electronic Imaging '88: International Electronic Imaging Exposition and Conference, Advance Printing of Paper Summaries, Inst. Graphic Commun., Waltham, MA, USA, 2 vol., pp. 159-163, 1988.

[3] H. Ando et al., "Design consideration and performance of a new MOS imaging device," IEEE Trans. Electron Devices, vol. ED-32(5), pp. 1484-1489, 1985.

[4] M. Aoki et al., "A 2/3-inch format MOS single-chip color imager," IEEE Trans. Electron Devices, vol. ED-29(4), pp. 745-750, 1982.

[5] H. Banba et al., "A CMOS bandgap reference circuit with sub-1-V operation," IEEE J. Solid-State Circuits, vol. 34, pp. 670-674, May 1999.

[6] C. H. V. Berkel et al., "Scanning the technology," Proc. IEEE, vol. 87, pp. 223-233, Feb. 1999.

[7] W. S. Boyle and G. E. Smith, "Charge coupled semiconductor devices," Bell Syst. Tech. J., vol. 49, pp. 587-593, 1970.

[8] V. Brajovic and T. Kanade, "New massively parallel technique for global operations in embedded imagers," IEEE Workshop on CCDs and Advanced Image Sensors, Dana Point, CA, Apr. 1995.

[9] S. G. Chamberlain, "Photosensitivity and scanning of silicon image detector arrays," IEEE J. Solid-State Circuits, vol. SC-4(6), pp. 333-342, 1969.

[10] A. P. Chandrakasan et al., "Minimizing power consumption in digital CMOS circuits," Proc. IEEE, vol. 83, pp. 498-523, Apr. 1995.
[11] T. B. Cho et al., “A 10 b, 20 Msample/s, 35 mW pipeline A/D converter,” IEEE J. Solid-State Circuits, vol. 30, pp. 166-172, Mar. 1995. [12] K. B. Cho et al., “A 1.2V micropower CMOS active pixel image sensor for portable applications”, ISSCC Digest of Tech. Papers, pp. 114-115, Feb. 2000. [13] R. Dawson et al, “A CMOS/buried-n-channel CCD compatible process for analog signal processing applications ,” RCA -Review, vol.38(3), pp. 406-435, 1977. [14] S. Decker et al., “A 3.7x3.7mm square pixel CMOS image sensor for digital Still camera application”, ISSCC Digest of Tech. Papers, pp. 182-183, Feb. 1998. [15] A. Dickinson et al., “A 256x256 CMOS active pixel image sensor with motion detection”, International Solid State Circuits Conference Dig. of Tech. Papers, pp. 226-227, 1995. [16] B. Dierckx, “XYW detector: a smart 2-dimensional particle sensor, ” Nuclear Instruments and Methods in Physics Research, vol. A275, p. 527, 1989. [17] J. F. Duque-Carrillo et al., “1-V Rail-to-rail operational amplifiers in standard CMOS technology,” IEEE Journal of Solid-State Circuits, vol. 35, pp. 33-44, Jan. 2000. [18] R. Dyck and G. Weckler, “Integrated arrays of silicon photodetectors for image sensing, " IEEE Trans. Electron Devices, vol. ED-15(4), pp. 196-201, 1968. 121 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. [19] S. Espejo et al., “Smart-pixel cellullar neural networks in analog current-mode CMOS technology,” IEEE J. Solid-State Circuits, vol. 29, pp. 895-905, Aug. 1994. [20] E.R. Fossum, “Active pixel sensors — are CCDs dinosaurs?,” Charge-Coupled Devices and Optical Sensors III, Proc. SPIE, vol. 1900, pp. 2-14,1993. [21] E. R. Fossum, “CMOS image sensors: electronic camera on a chip”, IEDM Tech. Dig, pp. 17-25, December 1995. [22] E. R. Fossum et al., “A 37x28mm2 600k-Pixel CMOS APS dental X-Ray camera-on-a-chip with self-triggered readout”, ISSCC Digest of Tech. Papers, pp. 172-173, Feb. 1998. [23] B. Fowler, A. Gamal, and D. Yang, “A CMOS area image sensor with pixel-level AID conversion,” International Solid State Circuits Conference Dig. of Tech. Papers, pp. 226-227, 1994. [24] J. Franca, Y. Tsividis, Design o f Analog-Digital VLSI Circuits for Telecommunications and Signal Processing, Prentice Hall, 1993. [25] P. Fry, P. Noble, and R. Rycroft, “Fixed pattern noise in photomatrices,” IEEE J. Solid-State Circuits, vol. SC-5(5), pp. 250-254, 1970. [26] L. L. Fujimori et al., “A 256 x 256 CMOS differential passive pixel imager with FPN reduction techniques,” ISSCC Digest o f Tech. Papers, pp. 106-107, Feb. 2000. [27] J. Henkel, “A Low-power hardware/software partitioning approach for core-based embedded systems,” Design Automation Conference, pp. 122-127, Jun. 1999. [28] Infrared Readout Electronics, Proc. SPIE, vol. 1684, 1992. [29] A. K. Jain, Fundamental o f Digital Image Processing, Prentice Hall, 1989. 122 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. [30] W.C. Jakes, Microwave Mobile Communications, IEEE Press, 1994. [31] S. Kawahito et al., “A compressed digital output CMOS image sensor with analog 2-D DCT processors and ADC quantizer,” International Solid State Circuits Conference Dig. of Tech. Papers, pp. 184-185, 1997. [32] S. E. Kemeny et al., “CMOS active pixel sensor array with programmable multiresolution readout,” IEEE Workshop on CCDs and Advanced Image Sensors, Dana Point, CA, Apr. 1995. [33] T. 
Appendix

This appendix presents, first, system integration: data communication, the phase-locked loop, optics considerations, and the user interface. Second, pixel characterization and noise sources in CMOS image sensors are discussed. Finally, alternative power supply sources are explored.

A.1 System Integration

The functionality necessary to increase system reliability in micropower image-sensor-based, ultra-low-voltage data communication applications is investigated.

A.1.1 Data Communication

In the data communication system shown in Figure A.1, the image data signal may be put into several different forms in the transmitter before it is sent. The choice among these ways of conveying the binary code information depends on the type of modulation and demodulation employed and on other constraints such as bandwidth and receiver complexity. The non-return-to-zero (NRZ) representations are efficient in terms of the required bandwidth and are widely used. On the receiver side, the data clock must be extracted from the received image data by using a phase-locked loop.

Figure A.1: Data communication (transmitter with shift register and line driver, channel, and receiver with amplifier, PLL-based clock recovery, and shift register).

A.1.2 Phase-Locked Loop (PLL)

A fully integrated phase-locked loop (PLL) has been designed for data communication, and several design techniques to improve its performance are considered. The voltage-controlled oscillator (VCO) is formed by a ring of single-ended current-steering amplifier cells for low noise and low power [H. C. Yang, "A low-jitter 0.3-165 MHz CMOS PLL frequency synthesizer for 3 V/5 V operation," IEEE J. Solid-State Circuits, vol. 32, no. 4, Apr. 1997]. To achieve smooth frequency transitions, a pulse-width-limiting circuit controls the pulse width of the phase/frequency detector (PFD) output. The charge pump shown in Figure A.2 is designed using cascoded current sources with CMOS switches that are controlled by bias currents. Since the output frequency is the VCO frequency divided by two, a PLL output with a 50% duty cycle is guaranteed.

Figure A.2: Phase-locked loop (PLL) block diagram (PFD, charge pump, loop filter, VCO, and divider stages between Fin and Fout).

Figure A.3 shows the physical layout of the PLL chip. The measured results show that the PLL operates from 0.25 MHz to 40 MHz with a 3.3 V power supply.

Figure A.3: The physical layout of the PLL.
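The locking behavior of such a loop can be illustrated with a minimal phase-domain model. The sketch below is a behavioral simulation only, not the fabricated circuit; the loop constants (kp, ki, kvco), frequencies, and time step are assumed values chosen for illustration.

```python
# Minimal phase-domain model of a charge-pump PLL (PFD + proportional-
# integral loop filter + VCO).  This is an illustrative sketch only; the
# gains, frequencies, and time step below are assumed values, not
# parameters of the fabricated design.

def simulate_pll(f_ref=10e6, f0=8e6, kvco=5e6, kp=0.35, ki=3.2e5,
                 dt=1e-9, n_steps=20_000):
    """Return the VCO frequency after n_steps of simple Euler integration."""
    phase_ref = 0.0     # reference phase, in cycles
    phase_vco = 0.0     # VCO phase, in cycles
    integ = 0.0         # integrator state of the loop filter, in volts
    f_vco = f0
    for _ in range(n_steps):
        phase_ref += f_ref * dt
        phase_vco += f_vco * dt
        err = phase_ref - phase_vco        # phase error seen by the PFD (cycles)
        integ += ki * err * dt             # integral path of the loop filter
        vctrl = kp * err + integ           # charge pump + filter -> control voltage
        f_vco = f0 + kvco * vctrl          # VCO tuning law (Hz/V)
    return f_vco

if __name__ == "__main__":
    # After a few microseconds the loop should lock to the 10 MHz reference.
    print(f"VCO frequency after 20 us: {simulate_pll() / 1e6:.3f} MHz")
```

With the assumed constants the loop settles within roughly 5 microseconds, so the printed frequency approaches the 10 MHz reference.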
A.1.3 Optics Considerations

To understand the whole imaging system, it is necessary to understand both the image sensor and the imaging optics [S. Campbell, "Optics primer for digital image sensors," http://www.photobit.com/Technology/White_Papers/white_papers.htm]. One fundamental parameter of the imaging optics (i.e., a lens) is its focal length, f. In the simplest terms, a lens' focal length is the distance from the center of the lens at which the lens focuses a spot of light coming in from infinity. Second to the lens' focal length is the diameter of its aperture, A. A common parameter defining a lens' performance is its f-number, f/#, given approximately as

f/\# = \frac{f}{A}.   (A.1)

The f/# determines the theoretical limit of how much light gets to the image plane (i.e., the image sensor) and how sharply focused that light can become. In terms of the f/#, the lens' resolution (the minimum theoretical spot size diameter, a) is given approximately, for visible light, by the equation for the Airy disk as

a \approx 2.44\,\lambda\,(f/\#) \approx (1.3\ \mu\mathrm{m})\,(f/\#).   (A.2)

As well, the diagonal full field of view captured by a lens/sensor combination is given approximately as

\mathrm{FOV} = 2\tan^{-1}\!\left(\frac{d}{2f}\right),   (A.3)

where tan^{-1} is the arctangent and d is the image sensor's diagonal dimension. In comparing different imaging systems, it is important to be sure that parameters such as field of view do not change.

A.1.3.1 Choosing the Right Lens

An imaging lens is needed to provide the sensor with an accurate representation of the object to be captured. As in conventional photography, but with the sensor replacing film, the lens fits between the sensor and the object. Light from the object passes through the lens, and the lens forms an image of the object where the sensor is located. To match the sensor's image-detecting ability to the lens' image-forming ability, the size, number, and distribution of the sensor's pixels must be compared to similar quantities in the lens' image. In determining when such a match is optimal, two parameters must be considered: the size of the sensor's pixels ("resolution") and the overall size of the image sensing array ("format").

If a particular image sensor contains pixels that are, for example, 5 microns wide, then the proper lens to use with that sensor should be able to resolve 5-micron-wide features in the images it forms. If the lens cannot resolve image features as small as 5 microns, then the images resulting from that particular lens/sensor combination will appear blurry. On the other hand, if the lens resolves image features that are equal to, or smaller than, 5 microns wide, then the resulting images will be sharp. This principle can be taken too far, however, when the lens can resolve image features that are much smaller than the sensor's pixel size.

If a particular sensor array has a 1/4-inch optical format (corresponding to a diagonal of approximately 4 mm), for example, then the proper lens to use with it is one that can form images at least as large as the 1/4-inch format (but not much larger). A lens having this ability will produce images that are filled out to the corners, while a lens that cannot form images of sufficient size will produce images with the corners cut off.

Beyond resolution and format, there are other parameters to consider when choosing optics for electronic imaging. "Distortion," for one, is a measure of the degree to which lines that should be straight appear bent or curved in the image formed by a lens. The parameter of "relative illumination" describes the brightness of the corners of a picture relative to the brightness of its center. The "f-number," as noted, describes how much light gets through a lens to form an image on the sensor. And the parameter of "field of view" describes how wide an image, in degrees, a particular lens/sensor combination will capture. These parameters are often interdependent.
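As an illustration of how these quantities interact, the short sketch below evaluates equations (A.1)-(A.3) for an assumed lens and sensor; the numerical values (focal length, aperture, pixel pitch, array size) are examples only, not a specification of the optics actually used with the sensor.

```python
import math

# Evaluate the basic lens/sensor relations of equations (A.1)-(A.3).
# All numbers below are illustrative assumptions, not the actual optics
# used with the sensor described in this work.

def f_number(focal_length_mm, aperture_mm):
    """Equation (A.1): f/# = f / A."""
    return focal_length_mm / aperture_mm

def airy_spot_um(f_num, wavelength_um=0.55):
    """Equation (A.2): minimum spot diameter ~ 2.44 * lambda * f/#."""
    return 2.44 * wavelength_um * f_num

def diag_fov_deg(sensor_diag_mm, focal_length_mm):
    """Equation (A.3): full diagonal field of view = 2 * arctan(d / 2f)."""
    return math.degrees(2.0 * math.atan(sensor_diag_mm / (2.0 * focal_length_mm)))

# Example: a 176 x 144 array with 5 um pixel pitch and a hypothetical
# f = 4 mm, A = 1.4 mm lens.
pixels_h, pixels_v, pitch_um = 176, 144, 5.0
sensor_diag_mm = math.hypot(pixels_h * pitch_um, pixels_v * pitch_um) / 1000.0

fnum = f_number(4.0, 1.4)
spot = airy_spot_um(fnum)
fov = diag_fov_deg(sensor_diag_mm, 4.0)

print(f"f/# = {fnum:.1f}")
print(f"Airy spot diameter = {spot:.1f} um (compare with the 5 um pixel pitch)")
print(f"diagonal field of view = {fov:.1f} degrees")
```

For this assumed lens the diffraction-limited spot (about 3.8 um) is smaller than the 5 um pixel, which is the matching condition discussed above.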
For a full discussion of issues related to lens selection, please see the white paper "Optics Primer for Digital Image Sensors" at http://www.photobit.com/Technology/White_Papers/white_papers.htm. A Microsoft Excel spreadsheet ("Optics Calculator") is also available. Figure A.4 shows the optics calculator for the 176 x 144, 5 µm pixel pitch micropower sensor.

Figure A.4: Optics calculator spreadsheet for the 176 x 144, 5 µm pixel pitch micropower sensor.

A.1.4 User Interface

A personal computer (PC) user interface was designed using Delphi software; Windows 95 or 98 is required as the operating system. The sensor output is transferred through the ECP parallel port. Figure A.5 shows the PC user interface.

Press the Load DSP button first and then the Live button; a raw black-and-white image should appear on the screen. To obtain a better image, a dark frame can be stored and subtracted. Close the lens diaphragm completely and cover the lens. Use the "Average" indicator to check for a light leak: "Average" shows the average signal of the unprocessed raw data over a frame. Press the Save Dark button and wait; click "OK" when the acquisition is done. An average over several frames is stored on the drive. Now open the diaphragm and click the Sub Dark button; the non-uniformity disappears. It is strongly recommended that all operations with the board be done with the Live button disabled. To store one frame of data, use the Get and Save BMP/Save Text buttons; Live mode must be disabled.

Figure A.5: PC user interface (Load DSP, Live, Save Dark, Sub Dark, Get, Save BMP, and Save Text controls, external bias and reference voltage settings, and frame statistics: SD = standard deviation, Average = average pixel value, RMS = root mean square, Diff. RMS = differential root mean square, MTSD = mean temporal standard deviation, PRNU = photo response non-uniformity).
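The Save Dark / Sub Dark operations described above amount to averaging several dark frames and subtracting the result from live frames. The sketch below shows that processing in a hypothetical offline form on synthetic data; the frame handling and noise levels are assumptions, not the actual Delphi/DSP implementation.

```python
import numpy as np

# Offline sketch of the dark-frame correction performed by the Save Dark /
# Sub Dark buttons.  Frame shape and data source are illustrative; the real
# user interface acquires frames over the ECP parallel port.

ROWS, COLS = 144, 176          # 176 x 144 sensor format

def average_dark(dark_frames):
    """'Save Dark': average several frames taken with the lens covered."""
    return np.mean(np.stack(dark_frames).astype(np.float64), axis=0)

def subtract_dark(raw_frame, dark_avg):
    """'Sub Dark': remove the stored dark frame from a live frame."""
    corrected = raw_frame.astype(np.float64) - dark_avg
    return np.clip(corrected, 0, None)

# Synthetic example: a uniform scene plus a fixed dark offset per pixel.
rng = np.random.default_rng(0)
dark_pattern = rng.normal(10.0, 2.0, size=(ROWS, COLS))   # fixed-pattern offset
darks = [dark_pattern + rng.normal(0, 0.5, size=(ROWS, COLS)) for _ in range(8)]
scene = 100.0 + dark_pattern + rng.normal(0, 0.5, size=(ROWS, COLS))

dark_avg = average_dark(darks)
clean = subtract_dark(scene, dark_avg)
print(f"residual non-uniformity: {clean.std():.2f} (was {scene.std():.2f})")
```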
A.2 Pixel Characterization

There are several important parameters that characterize a pixel, including conversion gain, dark current, and quantum efficiency.

A.2.1 Conversion Gain

The sensor is operated such that its noise is dominated by input photon shot noise. In this case, the noise measured in electrons is equal to the square root of the total number of photo-generated electrons. Several different exposures are used to generate a data set of read noise versus mean pixel output voltage. If the r.m.s. noise voltage is \sigma_v, then the r.m.s. number of noise electrons is

\sigma_e = \frac{\sigma_v}{G_{conv}}.   (A.4)

If we define N as the number of photo-generated electrons collected and M_p as the mean pixel output voltage, then, since photon shot noise gives \sigma_e = \sqrt{N} and M_p = G_{conv} N,

\sigma_v = G_{conv}\,\sigma_e = G_{conv}\sqrt{N} = \sqrt{G_{conv}\,M_p}.   (A.5)

Thus

G_{conv} = \frac{\sigma_v^2}{M_p}.   (A.6)

A.2.2 Dark Current

The dark current of an image sensor is a very important characteristic. Dark signal is the term used to refer to the background signal present in the image sensor readout when no light is incident upon the image sensor. This background signal is the result of thermally generated charge being collected in the photodiodes. The magnitude of the dark signal depends on the CMOS process, the image sensor architecture, the mode of operation, and the image sensor operating temperature. Due to the presence of localized defects in the silicon substrate, the dark signal collected in each pixel varies from pixel to pixel. This variation in dark signal is called the dark signal fixed pattern noise. The average current associated with the readout of a complete dark image is referred to as the dark current. The dark current doubles for approximately every 6-9 °C increase in image sensor temperature.

To measure dark current, multiple dark frames are taken at different integration times in the absence of illumination. The mean value over all pixels, M(t), is then calculated for each frame. The rate of increase of the mean value as a function of integration time is the dark signal, V_dark:

V_{dark} = \frac{dM(t)}{dt}.   (A.7)

The dark signal is measured in units of (output-referred) mV/s; it can be converted to a dark current density, J_dark, as follows:

J_{dark}\,[\mathrm{A/cm^2}] = \frac{q\,V_{dark}}{G_{conv}\,A_{pix}},   (A.8)

where q is the electron charge, A_pix is the individual pixel area, and G_conv is the pixel conversion gain.

Lowering the dark current improves the dynamic range because the shot noise on the dark current is reduced. Furthermore, dark current reduction is correlated with a decrease in fixed pattern noise and a reduction in the number of white pixels in the dark.
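The photon-transfer procedure of equations (A.4)-(A.6) and the dark-current conversion of equation (A.8) reduce to a pair of simple calculations, sketched below with made-up measurement data; the numbers are illustrative assumptions, not measured results from this sensor.

```python
import numpy as np

# Conversion gain from the photon-transfer method, eqs. (A.4)-(A.6):
# under photon shot noise, noise variance (V^2) vs. mean signal (V) is a
# straight line whose slope is G_conv.  Data below are synthetic examples,
# not measurements of the sensor described in this work.

Q = 1.602e-19  # electron charge [C]

def conversion_gain(mean_signal_v, noise_rms_v):
    """Least-squares slope of sigma_v^2 vs. M_p, i.e. G_conv in V/e-."""
    variance = np.asarray(noise_rms_v) ** 2
    slope, _offset = np.polyfit(np.asarray(mean_signal_v), variance, 1)
    return slope

def dark_current_density(v_dark_v_per_s, g_conv_v_per_e, pixel_area_cm2):
    """Equation (A.8): J_dark = q * V_dark / (G_conv * A_pix), in A/cm^2."""
    return Q * v_dark_v_per_s / (g_conv_v_per_e * pixel_area_cm2)

# Synthetic photon-transfer data for a 40 uV/e- pixel.
true_gain = 40e-6
means = np.linspace(0.05, 0.8, 8)            # mean signal [V]
noise = np.sqrt(true_gain * means)           # shot-noise-limited rms noise [V]

g_est = conversion_gain(means, noise)
print(f"estimated conversion gain: {g_est * 1e6:.1f} uV/e-")

# Dark current for an assumed 120 mV/s dark signal (about 3,000 e-/s at
# 40 uV/e-) and a 5 um x 5 um pixel.
j_dark = dark_current_density(0.12, g_est, (5e-4) ** 2)
print(f"dark current density: {j_dark * 1e9:.2f} nA/cm^2")
```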
A.2.3 Quantum Efficiency

To measure quantum efficiency (QE), the sensor must be illuminated with a known amount of narrow-band light. If the light falling on the sensor is known to be \varphi(\lambda) photons/s/cm^2 (ascertained using a calibrated photodetector), then the quantum efficiency is computed as

QE(\lambda) = \frac{M_p}{\varphi(\lambda)\,T\,G_{conv}\,A_{pix}},   (A.9)

where M_p is the mean output signal, T is the integration time, G_conv is the conversion gain, and A_pix is the pixel area. In practice, the QE is averaged over a small neighborhood of pixels. Note that the formula used to calculate the QE applies to the cross-sectional area of an entire pixel. Dividing this area by the pixel fill factor would yield larger numbers, but this is not generally done, since more than one actively defined region is photoresponsive.

A.3 Noise Considerations

A.3.1 Noise Sources of CMOS APS

Any image sensor not only produces an output signal but also generates different kinds of noise at the same time. There are two main groups of noise: fixed pattern noise (FPN) and temporal noise. FPN refers to pixel-to-pixel variations in the output signal; it is spatial rather than temporal in nature and is due to variations in individual pixel parameters arising in the optical path before photoelectric conversion and in the electrical path after the conversion has taken place. FPN can be specified either as a peak-to-peak value or as an r.m.s. value referenced to the signal mean.

Originally, APS had a very serious problem with two-dimensional FPN due to the threshold voltage variation of MOS transistors. In individual pixels, these variations accounted for 20-30 mV r.m.s. of non-uniformity. Also, if no special correction was applied, the FPN in the column readout circuitry reached 3-5 mV r.m.s. Double delta sampling (DDS) practically eliminated both of these problems.

It is convenient to divide FPN into two principally different components based on signal dependence. The dark signal non-uniformity (DSNU) is the signal-independent FPN. This two-dimensional FPN is due to pixel-to-pixel dark current variations. For example, if the average dark signal is 3,000 e-/s, in an APS running at 30 frames/s it will account for 100 electrons per frame and, probably, an FPN equal to 40 e- r.m.s., or 1.6 mV r.m.s. for G_conv = 40 µV/e-. Additionally, this non-uniformity depends on the sensor temperature, because the dark current approximately doubles every 6 to 9 °C.

Photo response non-uniformity (PRNU) is the signal-dependent component of FPN. Local variations in layer thicknesses, doping impurities, or the individual pixel geometry cause variations in QE, pixel capacitance, or source follower gain across the photosensitive area. PRNU can depend on light wavelength, pixel design, voltage mode, or timing.

In general, if the ADC has enough accuracy, FPN can be removed completely. DSNU is relatively easier to correct than PRNU: dark frame or row subtraction will reduce the two-dimensional or just the vertical component of FPN, respectively.
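The DSNU/PRNU split described above can be estimated from a stack of dark frames and a stack of flat-field frames. The sketch below is a simplified illustration of that bookkeeping on synthetic data; it is not the calibration procedure used for the actual device, and the frame sizes and noise levels are assumed.

```python
import numpy as np

# Split FPN into its signal-independent (DSNU) and signal-dependent (PRNU)
# parts from synthetic dark and flat-field frame stacks.  Frame sizes and
# noise levels are assumptions for illustration only.

rng = np.random.default_rng(1)
ROWS, COLS, N_FRAMES = 144, 176, 16

# Per-pixel fixed offsets/gains (the "true" non-uniformities to recover).
dark_offset = rng.normal(0.0, 1.6e-3, (ROWS, COLS))    # ~1.6 mV rms DSNU
gain_error = rng.normal(1.0, 0.02, (ROWS, COLS))       # ~2% PRNU

def make_stack(mean_signal_v):
    """Simulate frames: fixed pattern plus 0.5 mV rms temporal noise."""
    return (mean_signal_v * gain_error + dark_offset
            + rng.normal(0.0, 0.5e-3, (N_FRAMES, ROWS, COLS)))

dark_stack = make_stack(0.0)
flat_stack = make_stack(0.5)                            # 500 mV flat field

# Averaging over frames suppresses temporal noise, leaving the fixed pattern.
dark_avg = dark_stack.mean(axis=0)
flat_avg = flat_stack.mean(axis=0)

dsnu_rms = dark_avg.std()                               # volts, rms
prnu_pct = 100.0 * (flat_avg - dark_avg).std() / (flat_avg - dark_avg).mean()

print(f"DSNU ~ {dsnu_rms * 1e3:.2f} mV rms")
print(f"PRNU ~ {prnu_pct:.1f} % of signal")
```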
Temporal noise refers to time-dependent fluctuations in the signal level and is fundamentally different from the first group, fixed pattern noise. In a properly designed image sensor, the main contribution to the temporal noise is pixel noise. Noise in the pixel includes three main contributions: photon noise, dark current shot noise, and reset noise.

Because photon detection is essentially a random process obeying Poisson statistics, the standard deviation (or noise) associated with detecting a mean of N photons is the square root of N. This represents a fundamental limit on the sensor dynamic range and can only be improved by increasing the full well capacity of the pixel. For instance, integration of 40 thousand photons will produce photon noise equal to 200 electrons r.m.s., and the signal-to-noise ratio (SNR) would be 200. Photon noise limits the SNR when the detected signals are large.

Dark current shot noise has almost the same nature as photon noise, but in this case N is the number of electrons generated by the photodiode dark leakage current during the integration time. For instance, for 100 dark current electrons per frame the shot noise would be 10 e- r.m.s.

The third main contribution to the temporal pixel noise is the reset, or kTC, noise caused by the uncertainty of the voltage on the floating diffusion (FD) capacitance (C_FD) after the reset operation in the pixel. This uncertainty can be quantified as charge and is equal to the square root of kT C_FD for the standard "hard" reset, which is common in CCDs (k here represents Boltzmann's constant and T is the absolute temperature). In APS we use a so-called "soft" reset, in which the voltages on the drain and gate of the reset transistor in the pixel have the same value during the reset time. In this case the reset transistor operates in the incomplete charge transfer mode. Because the discharge process becomes emission limited, the uncertainty of the reset level on the FD drops to the square root of kT C_FD / 2 (Thornber, 1974). For example, with a conversion gain G_conv = 40 µV/e-, corresponding to C_FD = 4 fF, the reset noise would be 18 e-, or 0.72 mV.

There is a unique opportunity to eliminate temporal reset noise from image sensors (White, 1974). In correlated double sampling (CDS), the FD pixel voltage is read out twice, just after reset and after the signal charge has arrived. The difference between these two samples is the useful signal with the reset noise zeroed out. CDS also reduces MOS transistor flicker (1/f) noise.
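The reset-noise numbers quoted above can be checked directly from kTC. The sketch below evaluates the hard- and soft-reset cases for the 4 fF floating-diffusion example; room temperature (300 K) is an assumption.

```python
import math

# Verify the kTC reset-noise figures quoted in the text.
# Assumes room temperature (300 K); C_FD = 4 fF and G_conv = 40 uV/e-
# are the example values used above.

K_B = 1.380649e-23   # Boltzmann constant [J/K]
Q = 1.602e-19        # electron charge [C]

def reset_noise_electrons(c_fd, temp_k=300.0, soft_reset=False):
    """RMS reset-noise charge in electrons: sqrt(kTC) (hard) or sqrt(kTC/2) (soft)."""
    q_rms = math.sqrt(K_B * temp_k * c_fd)
    if soft_reset:
        q_rms /= math.sqrt(2.0)
    return q_rms / Q

c_fd = 4e-15          # floating diffusion capacitance [F]
g_conv = 40e-6        # conversion gain [V/e-]

hard = reset_noise_electrons(c_fd)
soft = reset_noise_electrons(c_fd, soft_reset=True)
print(f"hard reset: {hard:.0f} e-  ({hard * g_conv * 1e3:.2f} mV)")
print(f"soft reset: {soft:.0f} e-  ({soft * g_conv * 1e3:.2f} mV)")
```

The soft-reset case reproduces the 18 e- (0.72 mV) figure given in the text, while a hard reset would give about 25 e-.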
A.3.2 SNR and Dynamic Range

One of the most useful techniques available for the calibration, characterization, and evaluation of image sensor parameters in absolute terms is noise measurement and interpretation. Only with a noise measurement can we determine the charge conversion gain of a CMOS APS: the square of the temporal noise voltage is plotted as a function of the output signal voltage, and the slope gives the conversion gain in µV/e-.

Figure A.6: Temporal noise and fixed pattern noise vs. signal (log-log plot; the fixed pattern noise curve is DSNU limited at slope 0 with DSNU = 2 mV r.m.s. and PRNU limited at slope 1 with PRNU = 2%; the temporal noise curve is pixel-noise limited at slope 0 with 0.5 mV r.m.s. and photon-noise limited at slope 1/2 up to saturation; the signal axis spans 1-1000 mV, or 25-25,000 electrons at G_conv = 40 µV/e-).

The photon transfer curve of Figure A.6 plots noise as a function of input signal. In this plot, two different curves are displayed: fixed pattern noise and temporal noise. For each curve, based on signal dependence, two distinct noise regimes can be identified. The first one, the noise floor (slope = 0), represents noise measured under totally dark conditions; it is DSNU for the FPN curve and pixel temporal noise for the temporal curve. At signal levels along the zero-slope portions of both curves, the noise dominates over the input and the signal cannot be detected. As mentioned above, DSNU can be determined and subtracted.

As illumination of the imager increases, the noise becomes dominated by PRNU and photon noise, which are correlated with the signal and represent the second noise regime. Since the plot is in log coordinates, a line of slope 1/2 characterizes photon noise; this is because the uncertainty in the quantity of charge collected in any given pixel is proportional to the square root of the number of incident photons and is governed by Poisson statistics. The PRNU component of fixed pattern noise is visible at higher light levels. This noise is proportional to the signal and consequently produces a characteristic slope equal to 1 in the plot. Almost perfect elimination of this noise can be achieved with per-pixel gain and offset correction.

Full well saturation is observed on the temporal noise curve as a break in the slope. At this point, charge spreads between pixels, causing the noise value to decrease sharply. The non-linearity of the signal transfer curve before saturation also reduces the temporal noise value. In contrast, in the region close to saturation, FPN can increase significantly due to differences in source follower non-linearity.

The dynamic range of an imager is defined as the ratio of the largest signal that can be handled (linearly) to the smallest simultaneously detectable signal (usually the temporal noise floor in the dark).
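Following this definition, the SNR and dynamic range follow directly from the noise floor, the photon shot noise, and the full well capacity. The sketch below is a simple model using the example numbers from Figure A.6 (0.5 mV r.m.s. pixel noise, 40 µV/e- conversion gain, 25,000 e- full well); these are illustrative values rather than measured results for this sensor.

```python
import math

# Simple SNR and dynamic-range model for a photon-shot-noise-limited pixel.
# The noise floor, conversion gain, and full well below are the example
# values used in Figure A.6, not measured parameters of the device.

G_CONV = 40e-6                       # conversion gain [V/e-]
READ_NOISE_E = 0.5e-3 / G_CONV       # 0.5 mV rms pixel noise floor -> ~12.5 e-
FULL_WELL_E = 25_000                 # full well capacity [e-]

def snr_db(signal_e):
    """Overall SNR vs. signal, combining read noise and photon shot noise."""
    noise_e = math.sqrt(READ_NOISE_E ** 2 + signal_e)   # shot-noise variance = signal
    return 20.0 * math.log10(signal_e / noise_e)

def dynamic_range_db():
    """Largest linear signal over the dark temporal noise floor."""
    return 20.0 * math.log10(FULL_WELL_E / READ_NOISE_E)

for sig in (100, 1_000, 10_000, FULL_WELL_E):
    print(f"signal {sig:>6} e-:  SNR = {snr_db(sig):5.1f} dB")
print(f"dynamic range = {dynamic_range_db():.1f} dB")
```

With these assumed values the model gives roughly 44 dB of SNR at full well and about 66 dB of dynamic range, consistent with the curves sketched in Figures A.6 and A.7.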
10) As shown in Figure A.8, the power supply Vdd and GND has noises, noise free operating voltage V n0ise-free can expressed as K o i s e - f r e e = Vdd{PowerSupply) - V p o w e r _su p p ly _n o is e . (A.11) Typically, as lab experiments show, power supply ripple can measure as much as 200 mV. Similarly, ground bounce can occur as much as power supply did. 146 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. Vdd 11 Vdd N oise jr__ noise free operating voltage power supply ^ " T g n d GND ± N oise Figure A. 8: Power supply noise margin. A .4 Alternative Power Sources An interesting question arises: can we use ambient energy sources to power electronic systems? A circuit powered by ambient sources has a potentially infinite lifetime, as long as the source persists. In long-lived sensor embedded systems where 147 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. battery replacement is difficult, generating power from ambient sources becomes imperative. Various schemes have been proposed to eliminate the need for batteries in a portable digital system. Basically, energy that exists in the environment of the device is converted by a transducer into electrical form which can be used by a circuit to perform useful work. The sources of ambient energy available to the system depend on the application. The most familiar source is solar power, often used in commercial electronic calculators. Other examples include other types of electromagnetic fields, thermal gradients, fluid flow, and mechanical vibration. Other proposals include powering electronic devices through harnessing energy produced by the human body or the action of gravitational fields. Table A.l: Examples of ambient energy sources. Energy Source Transducer Power Walking (Direct Conversion) Piezoelectric 5 W Solar Photovoltaic Cell 20 mW Magnetic Field Coil 1.5 mW Walking (Vibration) Discrete Moving Coil 400 pW High Frequency Vibration MEMS Moving Coil 100 pW RF Field Antenna 5 pW Table A .l lists potential power output for a wide variety of energy sources 3. Stamer models the power available from directly converting the energy of footsteps by 3 A. Chandrakasan et al., “Design considerations for distributed microsensor systems,” IEEE CICC, pp.279-286,1999. 148 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission. inserting a piezoelectric transducer in the heel of a shoe. A direct transduction technique like this has the potential to generate large amounts of power, on the order of 5 W. Photovoltaic cells are the most popular transducer for converting ambient energy. Besides light, other types of electromagnetic fields have been proposed as energy sources. Magnetic fields coupled using an on-chip inductor have been shown to generate 1.5 mW of power. RF field has been demonstrated to generate 5 pW of power. Power generations using mechanical vibration are shown to produce a power output of 400 pW by human walking and 100 pW by micro electro mechanical systems (MEMS) transducer approach. 149 R eproduced with perm ission of the copyright owner. Further reproduction prohibited without perm ission.