TEL-750 - Digital Image Processing

The Digital Image Processing course introduces students to fundamental and advanced concepts, techniques, and algorithms used in image processing. It covers the theory and practical applications necessary to understand and manipulate digital images.

Key Topics Include:

  • Introduction to Digital Image Processing
    • History, applications, and fundamental steps in digital image processing.
    • Image sensing, acquisition, sampling, and quantization.
    • Basic relationships between pixels and mathematical tools.
  • Intensity Transformations and Spatial Filtering
    • Intensity transformation functions and histogram processing.
    • Low-pass (smoothing) and high-pass (sharpening) filters.
    • Combining spatial enhancement methods.
  • Filtering in the Frequency Domain
    • Two-dimensional Fourier Transform (2D-DFT) and its properties.
    • Frequency domain filters: low-pass, high-pass, and selective filtering.
    • Fast Fourier Transform (FFT) and applications.
  • Image Restoration and Reconstruction
    • Noise models and image degradation/restoration models.
    • Spatial and frequency domain noise reduction techniques.
    • Inverse filtering, Wiener filtering, and constrained least squares filtering.
  • Wavelet and Other Image Transforms
    • Basics of the wavelet transform and its applications.
    • Other transforms: Walsh-Hadamard, Haar, and matrix-based transformations.
  • Image Compression
    • Concepts of spatial and temporal redundancy.
    • Compression techniques: Huffman coding, arithmetic coding, block transform coding, predictive coding, and wavelet coding.
  • Morphological Image Processing
    • Operations: erosion, dilation, opening, closing, and hit-or-miss transform.
    • Algorithms for morphological reconstruction in binary and grayscale images.
  • Image Segmentation
    • Edge detection, thresholding, region growing, splitting, and merging.
    • Applications of segmentation in image analysis.
  • Feature Extraction
    • Boundary and region feature descriptors.
    • Whole-image features, scale-invariant feature transform (SIFT), and principal components.
  • Applications in Real-World Problems
    • Integration of theory and practical techniques for solving real-world image processing challenges.
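
Several of the topics above reduce to short, concrete algorithms. As one illustration, the morphological operations listed under Morphological Image Processing (erosion, dilation, opening) can be sketched in a few lines of NumPy. This is a minimal sketch for binary images with a fixed 3x3 square structuring element, not the course's own notebook code:

```python
import numpy as np

def erode(img):
    """Binary erosion with a 3x3 square structuring element:
    a pixel stays foreground only if its whole 3x3 neighbourhood is."""
    p = np.pad(img.astype(bool), 1, constant_values=False)
    out = np.ones_like(img, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return out

def dilate(img):
    """Binary dilation: a pixel becomes foreground if any 3x3 neighbour is."""
    p = np.pad(img.astype(bool), 1, constant_values=False)
    out = np.zeros_like(img, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + img.shape[0],
                     1 + dx : 1 + dx + img.shape[1]]
    return out

def opening(img):
    """Opening = erosion followed by dilation; removes specks smaller
    than the structuring element while roughly preserving larger shapes."""
    return dilate(erode(img))
```

Applied to a 5x5 binary image containing a 3x3 foreground square, erosion shrinks it to its single centre pixel and the subsequent dilation restores the square, while an isolated one-pixel speck is removed entirely by the opening.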

Laboratory Topics

  • Processing of Monochrome Images:
    Students work with a Google Colab notebook that includes essential implementations and functions for converting and processing grayscale images. This foundational exercise introduces basic image manipulation techniques.
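
The core conversion in this lab can be sketched as follows. This is a minimal NumPy illustration (the actual Colab notebook may use OpenCV or scikit-image instead); it uses the ITU-R BT.601 luminance weights, the same convention OpenCV's RGB-to-gray conversion follows:

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an H x W x 3 RGB image to 8-bit grayscale using the
    ITU-R BT.601 luminance weights: 0.299 R + 0.587 G + 0.114 B."""
    weights = np.array([0.299, 0.587, 0.114])
    return (img[..., :3].astype(np.float64) @ weights).round().astype(np.uint8)

# A tiny 1x3 test image: pure red, green, and blue pixels.
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(rgb_to_gray(img))  # green maps brightest, blue darkest
```

Note that green contributes most to perceived brightness and blue least, which is why the weights are unequal rather than a plain channel average.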

  • Intensity Transformations and Spatial Filtering:
    This lab involves applying intensity transformations and spatial filtering techniques using a Colab notebook. Students implement key operations for enhancing or modifying images in the spatial domain.
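
As a sketch of the two kinds of operations this lab covers, the snippet below implements a power-law (gamma) intensity transformation and a 3x3 averaging filter in plain NumPy. This is an illustrative assumption about the lab content, not the notebook's actual code:

```python
import numpy as np

def gamma_transform(img, gamma):
    """Power-law (gamma) intensity transformation: s = 255 * (r/255)^gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return (255.0 * (img / 255.0) ** gamma).round().astype(np.uint8)

def mean_filter3(img):
    """3x3 box (averaging) filter with zero padding - a basic
    spatial-domain smoothing (low-pass) operation."""
    padded = np.pad(img.astype(np.float64), 1)
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return (out / 9.0).round().astype(np.uint8)
```

Gamma correction is a point operation (each output pixel depends only on the corresponding input pixel), while the box filter is a neighbourhood operation; the lab contrasts exactly these two classes of spatial-domain processing.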

  • Frequency Domain Filtering:
    Using a Colab notebook, students experiment with basic frequency domain filters. They implement techniques such as low-pass and high-pass filtering using the Fourier Transform, learning to analyze and manipulate images in the frequency domain.
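
The basic workflow of this lab (forward DFT, masking, inverse DFT) can be sketched with NumPy's FFT routines. The ideal low-pass filter below is a standard textbook example, offered here as an assumed illustration rather than the notebook's exact implementation:

```python
import numpy as np

def ideal_lowpass(img, cutoff):
    """Ideal low-pass filtering via the 2-D DFT: zero out every frequency
    component farther than `cutoff` from the centred DC component."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    rows, cols = img.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
    F[dist > cutoff] = 0          # discard high-frequency components
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```

A high-pass filter is obtained by inverting the mask (`F[dist <= cutoff] = 0`). The sharp cutoff of the ideal filter causes ringing artifacts, which motivates the smoother Butterworth and Gaussian variants covered in the lectures.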

  • Image Restoration and Reconstruction:
    This lab focuses on techniques for reconstructing images degraded by noise or linear distortions. Students use Colab notebooks to apply both spatial and frequency domain restoration methods, gaining insights into noise reduction and quality improvement.
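
As one representative spatial-domain restoration technique from this lab, the 3x3 median filter below removes salt-and-pepper (impulse) noise. This is a minimal NumPy sketch under the assumption that the notebook covers median filtering; the actual lab may use library routines instead:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter - a classic spatial-domain tool for removing
    salt-and-pepper noise (border pixels use edge replication)."""
    padded = np.pad(img, 1, mode="edge")
    stacked = np.stack([padded[1 + dy : 1 + dy + img.shape[0],
                               1 + dx : 1 + dx + img.shape[1]]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    return np.median(stacked, axis=0).astype(img.dtype)

# A flat image corrupted by a single "salt" pixel is fully restored:
noisy = np.full((5, 5), 50, dtype=np.uint8)
noisy[2, 2] = 255
restored = median_filter3(noisy)
```

Unlike the averaging filter, the median filter discards the outlier rather than blending it into its neighbours, which is why it preserves edges far better on impulse noise.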

Available Resources

Theory Slides: Comprehensive lecture slides covering all course topics are available for download.

GitHub Repository: Access our dedicated repository for simulation programs in MATLAB/Octave and Python, categorized by chapter, to complement your understanding of theory and lab topics.

Student Projects

Students are encouraged to apply their knowledge through semester-long projects. Below are some examples of proposed projects for this course:

  1. Design and Implementation of an Automated Facial Recognition System:
    Create a system using OpenCV and TensorFlow to detect and recognize faces in real-time, including maintaining a database of authorized individuals for access control applications.
  2. Real-Time Object Detection for Autonomous Driving:
    Implement a system that uses pre-trained neural networks to identify and classify objects such as vehicles, pedestrians, and traffic signs for autonomous navigation.
  3. Health Monitoring for Plants:
    Develop a system using computer vision techniques to monitor plant health by detecting leaf diseases or stress. This project integrates image preprocessing and convolutional neural networks (CNNs).
  4. Hand Gesture Recognition and Finger Counting:
    Create a gesture-based control system using OpenCV and deep learning frameworks like TensorFlow to identify hand movements and count fingers in real-time.
  5. Traffic Light Recognition from Real-World Video Data:
    Build a system to detect and recognize traffic lights in video streams, addressing challenges such as motion blur and varying lighting conditions.
  6. Real-Time Pose Detection:
    Implement a body pose detection system to analyze and track human body movements using frameworks like OpenPose.

These projects allow students to deepen their understanding of image processing concepts while working on practical, innovative applications. Students submit their preferences and collaborate with the instructor to refine project scopes.

Equipment

  1. HuskyLens AI Camera
    The HuskyLens is an AI-powered camera used for tasks such as object detection, facial recognition, and gesture recognition. It simplifies image acquisition and preprocessing, making it an excellent tool for real-time applications like motion tracking and feature extraction.
  2. Raspberry Pi (Model 4 or 5)
    Raspberry Pi devices serve as versatile platforms for running lightweight image processing tasks. Students use them for projects involving image acquisition, preprocessing, and real-time analysis, such as plant health monitoring or basic object recognition.
  3. NVIDIA Jetson Nano and Xavier NX
    These high-performance computing platforms are ideal for deep learning applications in image processing. Students utilize these devices for tasks such as traffic analysis, pose detection, and real-time object tracking using convolutional neural networks (CNNs) and pre-trained models.
  4. Drones (e.g., DJI Tello)
    Drones equipped with cameras are used in projects involving aerial image acquisition and processing. Students experiment with tasks like detecting objects from aerial footage, analyzing terrain, or creating autonomous navigation algorithms.
  5. Arduino with Camera Modules
    Arduino boards, combined with camera modules, enable students to create embedded systems for tasks like gesture recognition or edge-based image processing. These platforms are often integrated with motors and other components for robotics applications.

Recommended Bibliography

The following resources are recommended for a deeper understanding of the concepts covered in the Digital Image Processing course:

  1. Dey S. Hands-On Image Processing with Python: Expert techniques for advanced image analysis and effective interpretation of image data, Packt Publishing, 2018.
  2. Solomon C., Breckon T. Fundamentals of Digital Image Processing: A Practical Approach with Examples in Matlab, Wiley-Blackwell, 2010.
  3. Shilkrot R., Escriva D. M. Mastering OpenCV 4: A Comprehensive Guide to Building Computer Vision and Image Processing Applications with C++, Packt Publishing, 2018.
  4. Petrou M., Petrou C. Image Processing: The Fundamentals (2nd Edition), Wiley-Blackwell, 2010.
  5. Gonzalez R. C., Woods R. E. Digital Image Processing Using MATLAB (2nd Edition), McGraw Hill, 2010.
  6. Solem J. E. Programming Computer Vision with Python: Tools and Algorithms for Analyzing Images, O’Reilly Media, 2012.

These resources are available in the university library or can be accessed through online platforms for supplementary reading. Students are encouraged to refer to these books for additional examples, explanations, and problem-solving techniques.

Lectures are held weekly during the Fall Semester, on Mondays 09:00 – 12:00 in Room K1.07.
Please refer to the official timetable on e-Class (access for registered users only) for the most up-to-date details.

Weekly Schedule

Week 1:
Introduction to Digital Image Processing
Overview of the course, history, applications, and fundamental principles of image processing.

Week 2:
Fundamental Principles of Digital Images
Representation of digital images, sampling, quantization, and basic mathematical tools.

Week 3:
Processing of Binary Images
Techniques for handling and transforming binary images, including morphological operations.

Week 4:
Image Enhancement (Part A)
Intensity transformations, histogram processing, and spatial filtering techniques.

Week 5:
Image Enhancement (Part B)
Advanced techniques for smoothing, sharpening, and combining enhancement methods.

Week 6:
Frequency Domain Filtering (Part A)
Introduction to the Fourier Transform and its applications in image processing.

Week 7:
Frequency Domain Filtering (Part B)
Practical applications of low-pass and high-pass filters in the frequency domain.

Week 8:
Image Restoration (Part A)
Noise models and basic restoration techniques in the spatial domain.

Week 9:
Image Restoration (Part B)
Advanced restoration techniques, including Wiener filtering and constrained least squares.

Week 10:
Wavelet and Other Image Transforms
Introduction to wavelet transform and other transforms such as Haar and Walsh-Hadamard.

Week 11:
Image Compression
Concepts and methods of image compression, including predictive coding and wavelet-based techniques.

Week 12:
Morphological Image Processing
Operations such as erosion, dilation, opening, and closing, with practical applications.

Week 13:
Image Segmentation and Feature Extraction
Edge detection, region-based segmentation, and extraction of image features for recognition tasks.