About this course
This course provides a beginner-friendly introduction to statistical computing and artificial intelligence in medicine.
Target audience
This course is designed for biomedical researchers, clinicians, trainees, and industry practitioners who want to understand and apply artificial intelligence in their own work.
This course will provide a strong foundation whether you are:

- a biologist hoping to analyze complex experimental data,
- a clinician or clinical researcher interested in AI methods for diagnosis, prognosis, or decision support,
- a public health or translational scientist working with large-scale health data, or
- a data scientist or engineer building tools for biomedical and healthcare applications.

No matter your starting point, the course is intended to help you connect modern AI methods to real biomedical questions.
We emphasize practical breadth with biomedical relevance. You will gain exposure to the major ideas shaping modern AI—from deep learning and Transformers to fine-tuning and generative modeling—while learning how these methods can be used in areas such as imaging, electronic health records, molecular biology, drug discovery, and scientific communication. The aim is not to make you an expert in every topic in a single semester, but to give you a clear map of the field, a usable computational toolkit, and the confidence to pursue more specialized applications in your own domain.
If you are excited by the possibility of using AI to accelerate discovery, improve patient care, and tackle important problems in medicine and biology, this course is for you.
GitHub repository
The course is open source and available on GitHub at https://github.com/junwei-lu/ai4med. You can find the source code, homework assignments, and other materials there.
Content structure
The main content of the course is organized into the following chapters:
- Coding with AI: Covers the integration of AI tools into coding workflows, including AI copilots, prompt engineering, and other AI tools that enhance productivity.
- Optimization: Introduces algorithms for solving optimization problems, including convexity, gradient descent, stochastic gradient descent, proximal gradient descent, and duality.
- Neural Networks: Covers the fundamentals of deep learning, including neural networks, regularization, convolutional neural networks (CNNs), residual networks (ResNets), and computer vision.
- Language Model Architecture: Explores the architecture of language models, including word vectors, attention mechanisms, Transformers, the Hugging Face library, and parameter-efficient fine-tuning (PEFT).
- Language Model Fine-Tuning: Focuses on fine-tuning large language models (LLMs), covering the basics, PEFT, and Group Relative Policy Optimization (GRPO).
- Foundation Models: Discusses building and pre-training foundation models, including coding tutorials and fine-tuning with Hugging Face.
- Reinforcement Learning: Introduces reinforcement learning concepts such as Markov decision processes (MDPs), policy gradients, and Proximal Policy Optimization (PPO).
- Generative Models: Covers generative models, including Langevin dynamics, diffusion models, and flow matching.
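As a small taste of the style of material covered, the gradient descent method from the Optimization chapter can be sketched in a few lines of Python. This is an illustrative example, not code from the course: the objective function and step size below are chosen purely for demonstration.

```python
# Minimal gradient descent sketch: minimize f(x) = (x - 3)^2
# using only its gradient f'(x) = 2 * (x - 3).

def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Fixed-step gradient descent given a gradient function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move against the gradient direction
    return x

# Starting from x = 0, the iterates converge toward the minimizer x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

The course builds from simple iterations like this one up to the stochastic and proximal variants used to train modern deep learning models.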
