Kai Shu’s NSF CAREER Award Supports Journey to Fair AI in the Real World


By Casey Moffitt

Kai Shu, Gladwin Development Chair Assistant Professor of Computer Science at Illinois Institute of Technology, has spent years conducting research to reduce bias in artificial intelligence algorithms—and that work has now earned him a prestigious CAREER Award from the National Science Foundation that will help him continue this research path.

“Receiving the NSF CAREER grant is a testament to years of hard work, dedication, and passion for advancing scientific understanding in my field,” Shu says. “It means that our proposed research directions have been acknowledged by a prestigious institution like the National Science Foundation, which further validates the importance and impact of our work in the scientific community.”

Ensuring fairness in AI algorithms means avoiding the amplification of existing inequalities and prejudice when the algorithms are adopted in real-world applications such as social media mining and health informatics. Fairness can degrade significantly under distribution shifts, such as domain or temporal shifts.

Existing fairness algorithms require direct access to exact demographic attributes, which is often difficult to obtain because of users’ privacy awareness and legal regulations on data privacy. Research also indicates that enforcing fairness may increase the risk of privacy leakage. In addition, malicious actors can amplify the demographic bias of AI algorithms by injecting “poisoned” samples during the training stage or by manipulating data during the inference stage.

AI tools require large amounts of training data, which shapes how the resulting algorithms make predictions. When biased data is fed directly into the training process, these tools can pose severe fairness risks.

For example, biased output from AI algorithms such as large language models can be introduced and amplified by maliciously injected instructions. Shu says one of the project’s goals is to develop more reliable AI models that can withstand such injection attacks to ensure robustly fair deployment.

“It is not feasible to simply ‘reject’ or ‘prevent’ biased data to ensure fairness,” Shu says. “Therefore, we need to develop effective AI algorithms to model the biased data such that we can reduce the unfairness across groups of populations, while maintaining the prediction performance.”
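One common pre-processing idea in the fairness literature, offered here purely as an illustration and not necessarily the project’s own method, is to reweigh biased training data so that over-represented combinations of demographic group and label count less during training. A minimal sketch, assuming a single demographic attribute and binary labels:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-sample weights in the style of reweighing pre-processing:
    w(g, y) = P(g) * P(y) / P(g, y), so (group, label) combinations
    that are over-represented in the data are down-weighted."""
    n = len(labels)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy biased dataset: positive labels are over-represented in group "A".
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweighing_weights(groups, labels)
# Over-represented pairs like ("A", 1) receive weights below 1,
# under-represented pairs like ("A", 0) receive weights above 1.
```

The reweighted samples can then be passed to any learner that accepts sample weights, reducing the group-level bias the model absorbs while leaving the data itself unchanged.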

Shu says another goal is to develop a new method that can achieve comparable accuracy between male and female patients without losing much overall accuracy.

For example, in predicting diabetes with electronic health record data, a machine learning method may achieve an overall 85 percent accuracy. However, it could achieve 70 percent accuracy for male patients and 90 percent accuracy for female patients. 
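The gap Shu describes can be quantified by comparing per-group accuracy with overall accuracy. A minimal sketch with toy data (the numbers below are illustrative, not results from the project):

```python
# Minimal sketch: measuring a group-wise accuracy gap, a simple
# fairness metric, for a binary classifier on illustrative data.

def group_accuracies(y_true, y_pred, groups):
    """Return overall accuracy and a dict of per-group accuracy."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return overall, per_group

# Toy predictions: the model is right 9/10 times for group "F"
# but only 7/10 times for group "M".
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0] * 2
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] + [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
groups = ["F"] * 10 + ["M"] * 10

overall, per_group = group_accuracies(y_true, y_pred, groups)
gap = abs(per_group["F"] - per_group["M"])
print(overall, per_group, gap)  # overall 0.8, but a 0.2 accuracy gap
```

A model can therefore look strong on aggregate accuracy while serving one demographic group substantially worse, which is exactly the disparity the diabetes example illustrates.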

The ultimate goal of the research is to develop effective solutions that ensure fairness under generalization, privacy, and robustness challenges.

Shu says his CAREER project aims to ensure the trustworthiness of AI algorithms by enhancing the fairness of the results that they produce. This will allow users in minority groups to receive results comparable to those of majority groups. It will also protect user privacy while achieving fair predictions, so that users’ sensitive information remains safe. The work will also shed light on potential risks that are overlooked when using LLMs in real-world applications.

The project’s multi-dimensional challenges include generalization, privacy, and robustness when achieving fairness in the real world. Existing research mainly focuses on finding a good trade-off between improving fairness metrics and maintaining prediction performance. However, it is more challenging to build fair AI models that also generalize across domains, protect privacy, and withstand potential malicious attacks.

“I am very excited about this project because it will not only advance the fundamentals of trustworthy AI, but also facilitate the fair AI deployment in real-world, high-stakes applications,” Shu says. “AI is undoubtedly the most important tech revolution. However, it becomes particularly important to ensure its trustworthiness and reliability by anticipating the potential risks of security, privacy, and safety.”

Disclaimer: Research reported in this publication is supported by the National Science Foundation under Award Number 2339198. This content is solely the responsibility of the authors and does not necessarily represent the official views of the National Science Foundation.