Recent advances in artificial intelligence (AI) research and technologies have led to widely reported successes in tasks previously thought too difficult or even impossible to accomplish. These include multilingual translation and other natural language processing tasks, smart manufacturing and logistics, and, of course, autonomous driving. AI technologies are now increasingly being applied in quantitative disciplines beyond computer science (CS), and some of these applications, such as autonomous driving, are mission critical in nature. However, many of the most advanced AI systems rely heavily on statistical machine learning (ML), which performs well in a statistical sense but can be unreliable on an individual basis. Applying safe, secure, and reliable (SSR) computing principles in an AI systems context can enhance the trustworthiness of AI among users of AI technologies. This paper describes a case study involving the use of specially developed experiential learning materials in a classroom setting. The main feature of these learning materials is an emphasis on SSR principles that enhance AI trustworthiness. The study initially involved only CS majors, but work is currently underway to expand it to non-CS STEM majors and other quantitative disciplines, including business analytics, statistics, mechanical engineering, civil engineering, and computer engineering. Findings to date, as well as suggestions for use of the materials, are presented. The research is currently being expanded in multiple directions, and interested educators and learners are invited to participate in this exciting endeavor.