Title: Spanning sensor resource management
Author: Krakow, Lucas W.
Advisor: Chong, Edwin K. P.
Committee members: Burns, Patrick; Pezeshki, Ali; Luo, Jie
Date issued: 2018
Date available: 2019-01-07
URI: https://hdl.handle.net/10217/193188
Language: eng
Type: Text; born digital; doctoral dissertations
Subjects: decision control; Q-value approximations; unmanned aerial vehicles; partially observable Markov decision process; autonomous control; sparse signal recovery
Rights: Copyright and other restrictions may apply. User is responsible for compliance with all applicable laws. For information about copyright law, please see https://libguides.colostate.edu/copyright.

Abstract:
This dissertation presents multiple applications of sensor resource management. Two chapters address adaptive estimation of time-varying sparse signals, and three chapters explore autonomous control of unmanned aerial vehicle (UAV) sensor platforms employed for target tracking. All of the included applications are posed as decision control problems formulated in the rigorous framework of a partially observable Markov decision process (POMDP), and solution methods based on Bellman's equation are applied to generate adaptive control policies for action selection in the given scenarios. Specifically, the rollout optimization method is applied to the signal-estimation problems with the objective of maximizing the information gain about the unknown sparse signal. For UAV sensor platform control, nominal belief-state optimization (NBO) is employed to select controls that optimize objectives including target-tracking error, surveillance performance, and fuel efficiency. The empirical studies in each investigation present evidence that non-myopic solution methods, which account for both the immediate and future costs of the current action choices, provide performance gains in these scenarios.
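To make the abstract's notion of rollout-based action selection concrete, the following is a minimal, illustrative sketch of Monte Carlo rollout Q-value approximation for a POMDP. The toy two-state model, the belief update, and all names (ToyPOMDP, rollout_q_value, base_policy, and so on) are assumptions introduced for illustration only; they do not reproduce the dissertation's sparse-signal or UAV-tracking models or its NBO method.

```python
import random

# Hypothetical toy POMDP used only to illustrate rollout; not the
# dissertation's signal-estimation or UAV-tracking model.
class ToyPOMDP:
    """A 2-state, 2-action POMDP with noisy binary observations."""
    def __init__(self, p_stay=0.9, p_correct_obs=0.8):
        self.p_stay = p_stay            # probability the hidden state persists
        self.p_correct = p_correct_obs  # probability the observation matches the state
        self.actions = [0, 1]

    def step(self, state, action):
        # Reward 1 when the action matches the hidden state, else 0.
        reward = 1.0 if action == state else 0.0
        next_state = state if random.random() < self.p_stay else 1 - state
        obs = next_state if random.random() < self.p_correct else 1 - next_state
        return next_state, obs, reward


def base_policy(belief):
    """Myopic base policy: act on the most likely state under the belief."""
    return 0 if belief[0] >= belief[1] else 1


def update_belief(model, belief, action, obs):
    """Bayes filter update for the 2-state toy model."""
    predicted = [
        belief[0] * model.p_stay + belief[1] * (1 - model.p_stay),
        belief[1] * model.p_stay + belief[0] * (1 - model.p_stay),
    ]
    likelihood = [model.p_correct if obs == s else 1 - model.p_correct for s in (0, 1)]
    unnorm = [p * l for p, l in zip(predicted, likelihood)]
    total = sum(unnorm) or 1.0
    return [u / total for u in unnorm]


def rollout_q_value(model, belief, action, horizon=10, n_sims=200, gamma=0.95):
    """Monte Carlo Q-value estimate: take `action` now, then follow the base
    policy for the remaining horizon, averaging discounted rewards."""
    total = 0.0
    for _ in range(n_sims):
        state = 0 if random.random() < belief[0] else 1  # sample hidden state
        b = list(belief)
        a = action
        discount, ret = 1.0, 0.0
        for _ in range(horizon):
            state, obs, reward = model.step(state, a)
            ret += discount * reward
            discount *= gamma
            b = update_belief(model, b, a, obs)
            a = base_policy(b)  # base policy governs all later decisions
        total += ret
    return total / n_sims


def rollout_action(model, belief):
    """Non-myopic action selection: choose the action with the best rollout Q-value."""
    return max(model.actions, key=lambda a: rollout_q_value(model, belief, a))


if __name__ == "__main__":
    model = ToyPOMDP()
    print("selected action:", rollout_action(model, [0.5, 0.5]))
```

The point of the sketch is the structure of non-myopic selection: each candidate action is scored by its immediate reward plus simulated future rewards under a simple base policy, so the chosen action accounts for downstream costs rather than only the one-step payoff.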