Modern applications of AI and machine learning in fields such as genomics, neuroscience, healthcare, and social sciences depend on the analysis of vast high-dimensional datasets that often include highly sensitive personal information. As AI systems rely more on data, achieving high predictive accuracy is no longer enough: machine learning algorithms must also ensure privacy and remain computationally efficient at scale. This project investigates the fundamental trade-offs between accuracy, privacy, and computational efficiency, aiming to establish a mathematically sound foundation for trustworthy, scalable, and privacy-conscious AI systems.

The project aims to develop new theories for machine learning algorithms that prioritize differential privacy and computational efficiency in high-dimensional settings. On the privacy front, it seeks to provide precise characterizations of privacy loss for commonly used techniques, such as differentially private principal component analysis. This is intended to enhance existing analyses that tend to be overly conservative, often introducing excessive noise that adversely impacts model utility. On the computational side, the project examines the limitations of efficient algorithms for low-rank matrix estimation and denoising, including iterative and low-degree polynomial methods under realistic models of data dependency. The overarching goal is to identify algorithms that optimally balance statistical accuracy, privacy guarantees, and computational scalability.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
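To make the privacy side concrete, the following is a minimal sketch of one standard approach to differentially private PCA (not the project's own algorithm): perturb the empirical covariance matrix with symmetric Gaussian noise calibrated by the Gaussian mechanism, then eigendecompose. The function name `dp_pca` and the row-norm assumption are illustrative choices for this sketch.

```python
import numpy as np

def dp_pca(X, k, epsilon, delta, rng=None):
    """Sketch of Gaussian-mechanism DP-PCA.

    Assumes each row of X has L2 norm at most 1, so replacing one
    of the n rows changes the empirical covariance X^T X / n by at
    most 2/n in Frobenius norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    cov = X.T @ X / n
    # Gaussian-mechanism noise scale for (epsilon, delta)-DP
    sensitivity = 2.0 / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    # Symmetric noise matrix: i.i.d. Gaussian upper triangle, mirrored
    g = rng.normal(0.0, sigma, size=(d, d))
    noise = np.triu(g) + np.triu(g, 1).T
    # Eigendecompose the privatized covariance; eigh sorts ascending
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, ::-1][:, :k]  # top-k principal directions
```

The noise scale grows as 1/(n * epsilon), which is one way to see the accuracy-privacy trade-off the abstract refers to: the "overly conservative" analyses it mentions correspond to choosing sigma larger than necessary.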
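On the computational side, the simplest baseline for the low-rank matrix denoising problem mentioned above is hard singular-value truncation: keep the top-r singular components of the noisy observation and discard the rest. This sketch is a generic baseline, not one of the iterative or low-degree polynomial methods the project studies; the name `svd_denoise` is an assumption of this example.

```python
import numpy as np

def svd_denoise(Y, rank):
    """Estimate a low-rank signal from a noisy matrix Y by
    truncating its SVD to the given rank."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Reconstruct using only the top-`rank` singular components
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```

Methods like this are statistically effective but can be costly at scale, which motivates the project's focus on whether cheaper iterative schemes can match their accuracy, especially when the noise entries are dependent rather than i.i.d.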