Projects

The CSEAI Faculty and IAB (Industry Advisory Board) work together to identify the center projects. Below are some existing projects from researchers at the center’s institutions.

Baylor University

  • Reliability for Complex Artificial Intelligence Design (Investigator(s): Robert J Marks II and Daniel Diaz): The more complex a system, the greater the number of performance contingencies. Contingency growth can be exponential with respect to system complexity. The goal of this research is to establish guidelines that mitigate the explosion of undesirable contingencies.
  • Standards for Ethically Aligned Autonomous And Intelligent Systems (Investigator(s): Pablo Rivas, Patricia Shaw, Ken Goodman): Computer algorithms and analytics are increasingly influential in government, business, and society, and alongside their benefits come significant ethical issues. This project uses standards and frameworks to help professionals who develop, maintain, deploy, or use algorithmic systems identify and mitigate unintended, unjustified, or inappropriate outcomes of intelligent systems.
  • Assessment of Deep Learning Models For Consumer Safety (Investigator(s): Pablo Rivas, Tomas Cerny, and Korn Sooksatra): Application of debiasing variational autoencoders (DB-VAEs) to mitigate dataset bias, and automatic assessment of machine learning models' robustness to adversarial attacks for AI safety and reliability.
  • Data Ethics and Eye Diseases (Investigator(s): Greg Hamerly and Pablo Rivas): CRADLE is a software project to detect leukocoria, a key symptom of ocular disease, using machine learning techniques. Training good prediction models requires a lot of data, which can be difficult to gather by hand, especially when interesting events are rare. Data augmentation and synthesis are potential solutions for sparse and rare data. However, there are potential ethical issues related to tracking data provenance and also reliability of prediction results. We aim to address these issues by studying and adapting best practices from biomedical ethics.
  • Approximate-Reasoning Artificial Intelligence Against Biased Training Data (Investigator(s): Liang Dong): To address AI application issues caused by biased training data, we are creating a new AI methodology that applies approximate reasoning and traces learned rules back to the particular training-data partitions that produced them.
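The adversarial-robustness assessment mentioned in the consumer-safety project above is often illustrated with a fast gradient sign method (FGSM) attack: perturb each input in the direction that increases its loss and measure how far accuracy falls. The following is a minimal sketch of that idea using a synthetic dataset and a plain logistic-regression "model" (the data, model, and parameters are illustrative assumptions, not the center's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: labels depend (noisily) on the first feature.
X = rng.normal(size=(200, 5))
y = (rng.random(200) < 1 / (1 + np.exp(-3 * X[:, 0]))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Fit logistic regression with plain gradient descent.
w = np.zeros(5)
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= 0.5 * grad

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w) > 0.5) == y)

# FGSM: step each input in the sign of its loss gradient.
# For logistic loss, d(loss)/dx = (sigmoid(x @ w) - y) * w.
eps = 0.5
grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]
X_adv = X + eps * np.sign(grad_x)

print("clean accuracy:      ", accuracy(X))
print("adversarial accuracy:", accuracy(X_adv))
```

A robustness assessment of the kind the project describes would sweep `eps` and report the accuracy curve; a model whose accuracy collapses at small `eps` is flagged as unsafe.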

Rutgers University

  • Intelligible Models With Good Explanations Via Tracing And Feedback (Investigator(s): Jorge Ortiz): While complex models in a highly interpretable domain, such as image understanding, have been studied extensively, we propose to examine model intelligibility in the context of multi-modal sensor fusion-based control systems. These are systems that fuse multiple sensor streams to make control decisions.
  • Privacy Awareness In Sensing And Learning (Investigator(s): Jie Gao): Ubiquitous sensing, wireless communication, and distributed computation have transformed the way we interact with the physical world. While we celebrate the convenience and improved quality of life that smart devices bring, data collection has moved from remote fields into private living environments, workplaces, and homes, and the data collected has shifted from non-sensitive scientific data to personal, sensitive data closely tied to users' health conditions, emotional states, physical activities, and social ties.
  • Autonomous And Privacy-Aware Data Collection (Investigator(s): Dario Pompili): In this project, we propose to achieve two complementary objectives: (1) develop ethical AI algorithms that direct distributed autonomous vehicles (and also humans if needed) to capture data from a region of interest in a coordinated manner; (2) design ethical algorithms to protect people’s privacy during the data-collection phase.
  • Federated Learning For Nonconvex Problems (Investigator(s): Yuqian Zhang): The increasing availability and diversity of quantitative information, together with advances in the theory and practice of computational science, are allowing once obscure information and insights to emerge, enabling more accurate and personalized product recommendations and decision making.
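The federated-learning setting in the last project above is commonly introduced via federated averaging (FedAvg): each client runs a few local gradient steps on its private data, and a server averages the resulting models. A toy sketch with a deliberately nonconvex per-client loss (the loss, client count, and step sizes are illustrative assumptions chosen to keep the example one-dimensional):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each client i holds a private target a_i; its local (nonconvex) loss is
#   f_i(w) = (w - a_i)**2 + 0.3 * sin(5 * (w - a_i))
# and the global objective is the average of the client losses.
targets = rng.normal(size=8)

def local_grad(w, a):
    return 2 * (w - a) + 1.5 * np.cos(5 * (w - a))

w_global = 0.0
for _ in range(50):               # communication rounds
    updates = []
    for a in targets:             # each client trains locally...
        w = w_global
        for _ in range(5):        # ...for a few gradient steps
            w -= 0.05 * local_grad(w, a)
        updates.append(w)
    w_global = np.mean(updates)   # server averages the client models

print("final global model:", w_global)
print("mean client target:", targets.mean())
```

Even in this one-dimensional toy, the nonconvex sine term and client heterogeneity pull the averaged model slightly away from the minimizer of the global objective, which is the kind of drift a rigorous analysis of federated learning for nonconvex problems must account for.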

University of Miami

  • Reliability For Complex Artificial Intelligence Design (Investigator(s): Robert J Marks II and Daniel Diaz): The more complex a system, the greater number of performance contingencies. Contingency growth can be exponential with respect to system complexity. The establishment of guidelines to mitigate the explosion of undesirable contingencies is the goal of this research.
  • Quantifying conservation of information via active information (Investigator(s): Daniel Diaz and J. Sunil Rao): Develop a new test based on active information (actinfo) and implement it as a measure of comparative performance between different learning strategies.
  • Ethics and Predictive Analytics (Investigator(s): Kenneth Goodman): This effort, a joint initiative of the University of Miami’s Institute for Bioethics and Health Policy and Institute for Data Science and Computing, will undertake analyses of these three applications of predictive analytics with special regard to Software engineering, Bias, and Governance.
  • Active information for supervised mode hunting (Investigator(s): Daniel Diaz and J. Sunil Rao): Develop an active information reduction algorithm for the detection of multiple modes, generalize data reduction with actinfo to mutual actinfo for supervised local mode hunting, and evaluate empirical and theoretical optimality properties under uncertainty.
  • Improving the Accuracy of Statistical Projections (Investigator(s): J. Sunil Rao and Daniel Diaz): In many important practical problems, there is interest in the prediction of new data that are outside the range of the training data – the so-called extrapolation prediction, or projection, problem – with which standard prediction methods struggle.
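Active information, which appears in two of the Miami projects above, quantifies how many bits of advantage a search or learning strategy has over blind (uniform) search at hitting a target: if blind search succeeds with probability q and the strategy succeeds with probability p, the active information is log2(p/q). A minimal sketch with an assumed target interval and an assumed Beta-distributed "strategy" (both are illustrative choices, not the projects' actual constructions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Target event: landing in the interval [0.9, 1.0] of the unit interval.
lo, hi = 0.9, 1.0
q = hi - lo                     # blind (uniform) success probability

# A "learning strategy" modeled as a Beta(8, 2) sampler biased toward 1.
samples = rng.beta(8, 2, size=100_000)
p = np.mean((samples >= lo) & (samples <= hi))   # estimated success probability

# Active information: bits of advantage over blind search.
actinfo = np.log2(p / q)
print(f"baseline q = {q:.3f}, strategy p = {p:.3f}, actinfo = {actinfo:.2f} bits")
```

Comparing learning strategies then amounts to comparing their actinfo values on the same target: a strategy with zero actinfo is no better than blind search, and negative actinfo means it is actively misleading.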