
Autonomous And Privacy-Aware Data Collection

Investigators: Dario Pompili, PhD

Summary

In this project, we propose to achieve two complementary objectives: (1) develop ethical AI algorithms that direct distributed autonomous vehicles (and also humans, if needed) to capture data from a region of interest in a coordinated manner; (2) design ethical algorithms to protect people’s privacy during the data-collection phase. Proper planning and coordination are needed to obtain full coverage of the region of interest (e.g., a disaster scene) while meeting additional requirements, such as gathering more information in critical regions than in others and preserving civilian privacy. Without such planning, two problems arise: (i) Partial coverage. Unplanned data collection focuses mostly on the conspicuous areas of the scene, leaving little information about the surrounding areas that is needed to establish overall context and to avoid missing important but non-conspicuous details; moreover, when only a few data-collection vehicles are present at the scene, there is an even greater need to direct them efficiently so as to achieve full coverage. (ii) Data quality. Without direction, the gathered data is uncoordinated and noisy (e.g., multiple vehicles capturing the same scene), making it difficult to process. Coordinated planning also allows vehicles with high-resolution cameras to be assigned to regions of higher importance, and vice versa, and video from static CCTV cameras can be fused with vehicle sensor data to enhance resolution. In addition, humans present in the region (bystanders) and emergency first responders can participate in this framework alongside autonomous vehicles (e.g., drones), capturing data with their smartphones. Simple, intuitive instructions such as “Move Left”, “Move Up”, “Zoom in”, and “Zoom out” can be displayed on users’ smartphones through a mobile application that can be downloaded freely from the online app store.
We believe that such a framework, in which autonomous vehicles and humans work collaboratively, will greatly enhance the efficiency of the data-collection phase.

In order to provide directions to autonomous vehicles (and human bystanders/first responders) and address both the partial-coverage and data-quality problems, i.e., (i) and (ii) above, we propose to design an innovative, domain-specific Multi-Agent Reinforcement Learning (MARL) framework, possibly organized hierarchically to compensate for communication unreliability and temporary disconnectivity due to wireless channel fading/shadow zones. To illustrate the idea, we use bridge inspection as an example; other use cases include assessing the damage caused by large forest fires, earthquakes, and tsunamis, to name a few. We decompose the incident zone into multiple manageable subzones with sufficient overlap (required for complete 2D stitching/3D reconstruction). Each subzone could correspond to one face/side or corner of the zone, with the corners providing the overlap between faces/sides. Each subzone is further divided into a rectangular grid, and each grid is cast as a Markov Decision Process (MDP) that is solved using MARL, the agents being the Intelligent Physical Systems (IPSs) together with the bystanders/first responders who cooperate in the data-gathering process. The actions for each state are those of a standard grid-world MDP: Move {Left, Right, Up, Down}, with boundary constraints enforced. Whenever an agent takes an action to move to a different state, it captures an image of the current state before executing the physical move; this image determines the reward the agent receives. The reward function can be tailored to the mission: examples include the difference between pre-event and during/post-event images (e.g., for hurricanes and earthquakes), images of not-yet-covered regions (when full coverage is the priority), or images of specific regions (e.g., burning regions in forest/domestic fires).
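To make the grid-world formulation above concrete, the sketch below (in Python) casts one subzone as an MDP with the Move {Left, Right, Up, Down} actions and boundary clamping, and trains multiple agents with independent tabular Q-learning. All names here are our own, and the reward is a simplifying assumption: a coverage proxy that pays +1 the first time any agent captures a cell, standing in for the image-based rewards (e.g., pre-/post-event image differences) described in the text; it is an illustrative sketch, not the project's actual implementation.

```python
import random

# Grid-world actions: (row delta, column delta)
ACTIONS = {"Left": (0, -1), "Right": (0, 1), "Up": (-1, 0), "Down": (1, 0)}


class SubzoneGrid:
    """One subzone cast as a grid-world MDP.

    The reward is a coverage proxy: +1 the first time any agent captures
    a cell, 0 on revisits. In the full framework the reward would instead
    be computed from the captured image (e.g., pre- vs. post-event
    difference, or membership in a not-yet-covered region).
    """

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.covered = set()  # cells already captured by some agent

    def step(self, state, action):
        dr, dc = ACTIONS[action]
        # Enforce boundary constraints by clamping to the grid.
        r = min(max(state[0] + dr, 0), self.rows - 1)
        c = min(max(state[1] + dc, 0), self.cols - 1)
        nxt = (r, c)
        reward = 0.0 if nxt in self.covered else 1.0
        self.covered.add(nxt)
        return nxt, reward


def train_agents(rows=4, cols=4, n_agents=2, episodes=300,
                 alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Independent tabular Q-learning: one Q-table per agent."""
    rng = random.Random(seed)
    qs = [{} for _ in range(n_agents)]

    def q(i, s, a):
        return qs[i].setdefault((s, a), 0.0)

    for _ in range(episodes):
        env = SubzoneGrid(rows, cols)
        states = [(rng.randrange(rows), rng.randrange(cols))
                  for _ in range(n_agents)]
        for s in states:               # starting cells count as captured
            env.covered.add(s)
        for _ in range(rows * cols * 2):  # fixed episode horizon
            for i in range(n_agents):
                s = states[i]
                if rng.random() < eps:    # epsilon-greedy exploration
                    a = rng.choice(list(ACTIONS))
                else:
                    a = max(ACTIONS, key=lambda x: q(i, s, x))
                nxt, r = env.step(s, a)
                best = max(q(i, nxt, x) for x in ACTIONS)
                qs[i][(s, a)] = q(i, s, a) + alpha * (r + gamma * best - q(i, s, a))
                states[i] = nxt
    return qs
```

In this toy setting the shared `covered` set is what couples the agents: a cell captured by one agent is worthless to the others, which pushes independently learning agents toward complementary trajectories, mirroring the coordinated-coverage objective of the proposed framework.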